Test Report: Hyper-V_Windows 16214

32a2e10b32a6388859c743812a16146a9af35ea5:2024-03-08:33460

Failed tests (14/216)

TestAddons/parallel/Registry (71.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 29.4176ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vq7gp" [15283e17-0641-48c8-bed6-7b74b3939a32] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0086326s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gggz5" [f077c346-86c0-44cb-93fa-374ceed6f5c6] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0194973s
addons_test.go:340: (dbg) Run:  kubectl --context addons-723800 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-723800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-723800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.5983483s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-723800 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-723800 ip: (2.5099156s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0307 22:44:55.321623   12352 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-723800 ip"
2024/03/07 22:44:57 [DEBUG] GET http://172.20.63.241:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-723800 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-723800 addons disable registry --alsologtostderr -v=1: (13.9986946s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-723800 -n addons-723800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-723800 -n addons-723800: (12.4641422s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-723800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-723800 logs -n 25: (10.1738023s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| delete  | -p download-only-244600                                                                     | download-only-244600 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| start   | -o=json --download-only                                                                     | download-only-219100 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC |                     |
	|         | -p download-only-219100                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| delete  | -p download-only-219100                                                                     | download-only-219100 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| start   | -o=json --download-only                                                                     | download-only-409000 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC |                     |
	|         | -p download-only-409000                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| delete  | -p download-only-409000                                                                     | download-only-409000 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| delete  | -p download-only-244600                                                                     | download-only-244600 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| delete  | -p download-only-219100                                                                     | download-only-219100 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| delete  | -p download-only-409000                                                                     | download-only-409000 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-201700 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC |                     |
	|         | binary-mirror-201700                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:54908                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-201700                                                                     | binary-mirror-201700 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| addons  | enable dashboard -p                                                                         | addons-723800        | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC |                     |
	|         | addons-723800                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-723800        | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC |                     |
	|         | addons-723800                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-723800 --wait=true                                                                | addons-723800        | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:44 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-723800        | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:44 UTC | 07 Mar 24 22:45 UTC |
	|         | addons-723800                                                                               |                      |                   |         |                     |                     |
	| addons  | addons-723800 addons                                                                        | addons-723800        | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:44 UTC | 07 Mar 24 22:44 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-723800 addons disable                                                                | addons-723800        | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:44 UTC | 07 Mar 24 22:45 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| ip      | addons-723800 ip                                                                            | addons-723800        | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:44 UTC | 07 Mar 24 22:44 UTC |
	| addons  | addons-723800 addons disable                                                                | addons-723800        | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:44 UTC | 07 Mar 24 22:45 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| ssh     | addons-723800 ssh curl -s                                                                   | addons-723800        | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:45 UTC | 07 Mar 24 22:45 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |                   |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-723800        | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:45 UTC |                     |
	|         | -p addons-723800                                                                            |                      |                   |         |                     |                     |
	| ssh     | addons-723800 ssh cat                                                                       | addons-723800        | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:45 UTC |                     |
	|         | /opt/local-path-provisioner/pvc-9400bf85-94ed-489b-a648-5551c6e089a1_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-723800 ip                                                                            | addons-723800        | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:45 UTC |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 22:38:42
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 22:38:42.875291    7476 out.go:291] Setting OutFile to fd 804 ...
	I0307 22:38:42.875944    7476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:38:42.875944    7476 out.go:304] Setting ErrFile to fd 800...
	I0307 22:38:42.875944    7476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:38:42.895576    7476 out.go:298] Setting JSON to false
	I0307 22:38:42.898100    7476 start.go:129] hostinfo: {"hostname":"minikube7","uptime":10077,"bootTime":1709841045,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0307 22:38:42.898100    7476 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 22:38:42.902703    7476 out.go:177] * [addons-723800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0307 22:38:42.907548    7476 notify.go:220] Checking for updates...
	I0307 22:38:42.907609    7476 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 22:38:42.910002    7476 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 22:38:42.913131    7476 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0307 22:38:42.915914    7476 out.go:177]   - MINIKUBE_LOCATION=16214
	I0307 22:38:42.918553    7476 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 22:38:42.921325    7476 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 22:38:47.870905    7476 out.go:177] * Using the hyperv driver based on user configuration
	I0307 22:38:47.884581    7476 start.go:297] selected driver: hyperv
	I0307 22:38:47.886948    7476 start.go:901] validating driver "hyperv" against <nil>
	I0307 22:38:47.886948    7476 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 22:38:47.932224    7476 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 22:38:47.933106    7476 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 22:38:47.933106    7476 cni.go:84] Creating CNI manager for ""
	I0307 22:38:47.933106    7476 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 22:38:47.933106    7476 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 22:38:47.933727    7476 start.go:340] cluster config:
	{Name:addons-723800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-723800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 22:38:47.933996    7476 iso.go:125] acquiring lock: {Name:mk41e0d38e058de906ab8df117c3158b3dc0e5b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:38:47.939258    7476 out.go:177] * Starting "addons-723800" primary control-plane node in "addons-723800" cluster
	I0307 22:38:47.941648    7476 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 22:38:47.941648    7476 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0307 22:38:47.941648    7476 cache.go:56] Caching tarball of preloaded images
	I0307 22:38:47.942272    7476 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 22:38:47.942297    7476 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 22:38:47.942886    7476 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\config.json ...
	I0307 22:38:47.943216    7476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\config.json: {Name:mke8fbb6c60aadb0127cf54dcd6967337f32a298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:38:47.944161    7476 start.go:360] acquireMachinesLock for addons-723800: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 22:38:47.944758    7476 start.go:364] duration metric: took 566.7µs to acquireMachinesLock for "addons-723800"
	I0307 22:38:47.944758    7476 start.go:93] Provisioning new machine with config: &{Name:addons-723800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:addons-723800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 22:38:47.944758    7476 start.go:125] createHost starting for "" (driver="hyperv")
	I0307 22:38:47.949989    7476 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0307 22:38:47.950608    7476 start.go:159] libmachine.API.Create for "addons-723800" (driver="hyperv")
	I0307 22:38:47.950608    7476 client.go:168] LocalClient.Create starting
	I0307 22:38:47.951287    7476 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0307 22:38:48.101328    7476 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0307 22:38:48.246249    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0307 22:38:50.094019    7476 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0307 22:38:50.094149    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:38:50.094149    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0307 22:38:51.568969    7476 main.go:141] libmachine: [stdout =====>] : False
	
	I0307 22:38:51.574706    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:38:51.574772    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0307 22:38:52.788719    7476 main.go:141] libmachine: [stdout =====>] : True
	
	I0307 22:38:52.788719    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:38:52.795135    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0307 22:38:55.873116    7476 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0307 22:38:55.882879    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:38:55.885237    7476 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0307 22:38:56.323095    7476 main.go:141] libmachine: Creating SSH key...
	I0307 22:38:56.418022    7476 main.go:141] libmachine: Creating VM...
	I0307 22:38:56.418022    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0307 22:38:58.779229    7476 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0307 22:38:58.788894    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:38:58.788894    7476 main.go:141] libmachine: Using switch "Default Switch"
	I0307 22:38:58.788894    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0307 22:39:00.229946    7476 main.go:141] libmachine: [stdout =====>] : True
	
	I0307 22:39:00.235362    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:00.235362    7476 main.go:141] libmachine: Creating VHD
	I0307 22:39:00.235362    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0307 22:39:03.498328    7476 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 97A61022-3E8B-4C63-BB38-88F24A8BCB47
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0307 22:39:03.498328    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:03.498328    7476 main.go:141] libmachine: Writing magic tar header
	I0307 22:39:03.498328    7476 main.go:141] libmachine: Writing SSH key tar header
	I0307 22:39:03.507292    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0307 22:39:06.336214    7476 main.go:141] libmachine: [stdout =====>] : 
	I0307 22:39:06.345238    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:06.345324    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\disk.vhd' -SizeBytes 20000MB
	I0307 22:39:08.523919    7476 main.go:141] libmachine: [stdout =====>] : 
	I0307 22:39:08.535894    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:08.535894    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-723800 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0307 22:39:11.693896    7476 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-723800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0307 22:39:11.693952    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:11.693952    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-723800 -DynamicMemoryEnabled $false
	I0307 22:39:13.566736    7476 main.go:141] libmachine: [stdout =====>] : 
	I0307 22:39:13.566736    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:13.566736    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-723800 -Count 2
	I0307 22:39:15.336320    7476 main.go:141] libmachine: [stdout =====>] : 
	I0307 22:39:15.336320    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:15.336512    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-723800 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\boot2docker.iso'
	I0307 22:39:17.504301    7476 main.go:141] libmachine: [stdout =====>] : 
	I0307 22:39:17.513636    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:17.513636    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-723800 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\disk.vhd'
	I0307 22:39:19.703545    7476 main.go:141] libmachine: [stdout =====>] : 
	I0307 22:39:19.703545    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:19.703545    7476 main.go:141] libmachine: Starting VM...
	I0307 22:39:19.713012    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-723800
	I0307 22:39:22.490980    7476 main.go:141] libmachine: [stdout =====>] : 
	I0307 22:39:22.490980    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:22.490980    7476 main.go:141] libmachine: Waiting for host to start...
	I0307 22:39:22.490980    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:39:24.454095    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:39:24.454290    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:24.454290    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:39:26.625311    7476 main.go:141] libmachine: [stdout =====>] : 
	I0307 22:39:26.625311    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:27.632856    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:39:29.610592    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:39:29.610764    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:29.610838    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:39:31.800459    7476 main.go:141] libmachine: [stdout =====>] : 
	I0307 22:39:31.800459    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:32.821434    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:39:34.730643    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:39:34.730643    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:34.735972    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:39:36.974992    7476 main.go:141] libmachine: [stdout =====>] : 
	I0307 22:39:36.976587    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:37.990973    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:39:39.885871    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:39:39.885871    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:39.887861    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:39:42.138397    7476 main.go:141] libmachine: [stdout =====>] : 
	I0307 22:39:42.147902    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:43.150233    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:39:45.054836    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:39:45.063728    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:45.063860    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:39:47.165935    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:39:47.165935    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:47.175158    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:39:48.930727    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:39:48.939340    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:48.939340    7476 machine.go:94] provisionDockerMachine start ...
	I0307 22:39:48.939340    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:39:50.722021    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:39:50.722021    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:50.731827    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:39:52.829439    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:39:52.829439    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:52.834894    7476 main.go:141] libmachine: Using SSH client type: native
	I0307 22:39:52.835378    7476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.63.241 22 <nil> <nil>}
	I0307 22:39:52.835378    7476 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 22:39:52.962971    7476 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 22:39:52.962971    7476 buildroot.go:166] provisioning hostname "addons-723800"
	I0307 22:39:52.962971    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:39:54.739703    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:39:54.739703    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:54.739703    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:39:56.910938    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:39:56.911271    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:56.915797    7476 main.go:141] libmachine: Using SSH client type: native
	I0307 22:39:56.916511    7476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.63.241 22 <nil> <nil>}
	I0307 22:39:56.916511    7476 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-723800 && echo "addons-723800" | sudo tee /etc/hostname
	I0307 22:39:57.067277    7476 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-723800
	
	I0307 22:39:57.067277    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:39:58.822014    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:39:58.831258    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:39:58.831258    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:40:00.918739    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:40:00.918739    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:00.923741    7476 main.go:141] libmachine: Using SSH client type: native
	I0307 22:40:00.924616    7476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.63.241 22 <nil> <nil>}
	I0307 22:40:00.924638    7476 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-723800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-723800/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-723800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 22:40:01.067450    7476 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 22:40:01.067450    7476 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0307 22:40:01.067450    7476 buildroot.go:174] setting up certificates
	I0307 22:40:01.067450    7476 provision.go:84] configureAuth start
	I0307 22:40:01.067450    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:40:02.817356    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:40:02.817356    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:02.825501    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:40:04.955875    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:40:04.965366    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:04.965494    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:40:06.690648    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:40:06.690820    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:06.690922    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:40:08.796719    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:40:08.796938    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:08.796938    7476 provision.go:143] copyHostCerts
	I0307 22:40:08.797466    7476 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0307 22:40:08.799045    7476 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0307 22:40:08.799842    7476 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0307 22:40:08.801195    7476 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-723800 san=[127.0.0.1 172.20.63.241 addons-723800 localhost minikube]
	I0307 22:40:08.992777    7476 provision.go:177] copyRemoteCerts
	I0307 22:40:09.004610    7476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 22:40:09.004886    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:40:10.757680    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:40:10.757680    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:10.757680    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:40:12.848230    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:40:12.857416    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:12.857609    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:40:12.959430    7476 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (3.9547837s)
	I0307 22:40:12.960103    7476 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0307 22:40:13.005574    7476 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 22:40:13.043176    7476 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 22:40:13.086135    7476 provision.go:87] duration metric: took 12.0185752s to configureAuth
	I0307 22:40:13.086135    7476 buildroot.go:189] setting minikube options for container-runtime
	I0307 22:40:13.086756    7476 config.go:182] Loaded profile config "addons-723800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 22:40:13.086903    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:40:14.815777    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:40:14.826077    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:14.826240    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:40:16.931180    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:40:16.931180    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:16.937105    7476 main.go:141] libmachine: Using SSH client type: native
	I0307 22:40:16.937622    7476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.63.241 22 <nil> <nil>}
	I0307 22:40:16.937694    7476 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 22:40:17.064116    7476 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 22:40:17.064116    7476 buildroot.go:70] root file system type: tmpfs
	I0307 22:40:17.064646    7476 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 22:40:17.064891    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:40:18.804230    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:40:18.804230    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:18.804230    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:40:20.935524    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:40:20.935524    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:20.949064    7476 main.go:141] libmachine: Using SSH client type: native
	I0307 22:40:20.949064    7476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.63.241 22 <nil> <nil>}
	I0307 22:40:20.949064    7476 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 22:40:21.104417    7476 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
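A minimal sketch (illustrative override path and dockerd flags, not taken from this run) of the ExecStart-reset pattern the unit above relies on: a bare ExecStart= clears the command inherited from the base unit, so the second ExecStart= becomes the only start command.

# Sketch only: write a drop-in that resets ExecStart, then reload and restart.
sudo mkdir -p /etc/systemd/system/docker.service.d
printf '%s\n' '[Service]' 'ExecStart=' 'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' |
  sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null
sudo systemctl daemon-reload && sudo systemctl restart docker
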
	I0307 22:40:21.104645    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:40:22.867945    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:40:22.867945    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:22.868123    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:40:24.974312    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:40:24.983554    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:24.988129    7476 main.go:141] libmachine: Using SSH client type: native
	I0307 22:40:24.988839    7476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.63.241 22 <nil> <nil>}
	I0307 22:40:24.988839    7476 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 22:40:26.057412    7476 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
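The diff || { mv; systemctl ... } command above is an install-only-if-changed idiom: diff exits non-zero both when the new unit differs and when the old file does not exist yet, which is why the "can't stat" line above is expected on a first provision rather than a failure. The same idiom in generic form (illustrative unit name):

dst=/etc/systemd/system/example.service      # illustrative target unit
new=$(mktemp)
printf '[Unit]\nDescription=example\n' > "$new"
if ! sudo diff -u "$dst" "$new"; then        # differs, or $dst missing -> (re)install
  sudo mv "$new" "$dst"
  sudo systemctl daemon-reload
fi
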
	I0307 22:40:26.057958    7476 machine.go:97] duration metric: took 37.1182762s to provisionDockerMachine
	I0307 22:40:26.057958    7476 client.go:171] duration metric: took 1m38.1064505s to LocalClient.Create
	I0307 22:40:26.058001    7476 start.go:167] duration metric: took 1m38.1064942s to libmachine.API.Create "addons-723800"
	I0307 22:40:26.058127    7476 start.go:293] postStartSetup for "addons-723800" (driver="hyperv")
	I0307 22:40:26.058127    7476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 22:40:26.069858    7476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 22:40:26.070402    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:40:27.820594    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:40:27.830127    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:27.830127    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:40:29.965973    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:40:29.965973    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:29.976937    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:40:30.084317    7476 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.0138378s)
	I0307 22:40:30.096462    7476 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 22:40:30.103944    7476 info.go:137] Remote host: Buildroot 2023.02.9
	I0307 22:40:30.103944    7476 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0307 22:40:30.103944    7476 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0307 22:40:30.104577    7476 start.go:296] duration metric: took 4.0464127s for postStartSetup
	I0307 22:40:30.107485    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:40:31.844194    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:40:31.844329    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:31.844329    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:40:33.932069    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:40:33.941570    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:33.941653    7476 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\config.json ...
	I0307 22:40:33.944494    7476 start.go:128] duration metric: took 1m45.9987634s to createHost
	I0307 22:40:33.944494    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:40:35.719675    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:40:35.729755    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:35.730018    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:40:37.886098    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:40:37.886098    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:37.902349    7476 main.go:141] libmachine: Using SSH client type: native
	I0307 22:40:37.902525    7476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.63.241 22 <nil> <nil>}
	I0307 22:40:37.902525    7476 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 22:40:38.028610    7476 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709851238.049609787
	
	I0307 22:40:38.028610    7476 fix.go:216] guest clock: 1709851238.049609787
	I0307 22:40:38.028610    7476 fix.go:229] Guest: 2024-03-07 22:40:38.049609787 +0000 UTC Remote: 2024-03-07 22:40:33.944494 +0000 UTC m=+111.227793101 (delta=4.105115787s)
	I0307 22:40:38.028674    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:40:39.790574    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:40:39.800068    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:39.800141    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:40:41.903616    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:40:41.912755    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:41.917941    7476 main.go:141] libmachine: Using SSH client type: native
	I0307 22:40:41.918632    7476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.63.241 22 <nil> <nil>}
	I0307 22:40:41.918632    7476 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709851238
	I0307 22:40:42.057213    7476 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar  7 22:40:38 UTC 2024
	
	I0307 22:40:42.057213    7476 fix.go:236] clock set: Thu Mar  7 22:40:38 UTC 2024
	 (err=<nil>)
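The 4.1s delta reported above is corrected by writing the host-side epoch into the guest, which is exactly what the sudo date -s @1709851238 command does. Reproduced by hand inside the guest (epoch value copied from this run, otherwise illustrative):

date +%s.%N                 # what the guest currently thinks the time is
sudo date -s @1709851238    # snap the guest clock to the epoch measured on the host
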
	I0307 22:40:42.057213    7476 start.go:83] releasing machines lock for "addons-723800", held for 1m54.1114079s
	I0307 22:40:42.057213    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:40:43.817367    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:40:43.826790    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:43.826845    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:40:45.932358    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:40:45.941681    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:45.946395    7476 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 22:40:45.946585    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:40:45.954515    7476 ssh_runner.go:195] Run: cat /version.json
	I0307 22:40:45.954515    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:40:47.793563    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:40:47.793563    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:47.803027    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:40:47.833582    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:40:47.833582    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:47.833719    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:40:50.079534    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:40:50.084251    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:50.084251    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:40:50.106109    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:40:50.106109    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:40:50.107231    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:40:50.320802    7476 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.374367s)
	I0307 22:40:50.320857    7476 ssh_runner.go:235] Completed: cat /version.json: (4.3662467s)
	I0307 22:40:50.330740    7476 ssh_runner.go:195] Run: systemctl --version
	I0307 22:40:50.349811    7476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 22:40:50.356982    7476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 22:40:50.367774    7476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 22:40:50.390657    7476 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 22:40:50.390806    7476 start.go:494] detecting cgroup driver to use...
	I0307 22:40:50.391212    7476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 22:40:50.428819    7476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 22:40:50.458619    7476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 22:40:50.474548    7476 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 22:40:50.485106    7476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 22:40:50.511969    7476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 22:40:50.537319    7476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 22:40:50.564847    7476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 22:40:50.591281    7476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 22:40:50.617059    7476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 22:40:50.643541    7476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 22:40:50.668887    7476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 22:40:50.694795    7476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:40:50.874023    7476 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 22:40:50.896537    7476 start.go:494] detecting cgroup driver to use...
	I0307 22:40:50.911629    7476 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 22:40:50.942128    7476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 22:40:50.970946    7476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 22:40:51.005373    7476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 22:40:51.036667    7476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 22:40:51.065491    7476 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 22:40:51.123857    7476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 22:40:51.142160    7476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 22:40:51.179005    7476 ssh_runner.go:195] Run: which cri-dockerd
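The second printf | tee above repoints /etc/crictl.yaml from containerd to the Docker CRI shim, and the which cri-dockerd check confirms the shim binary is present. An illustrative way to verify the wiring (not part of this run):

cat /etc/crictl.yaml   # expected: runtime-endpoint: unix:///var/run/cri-dockerd.sock
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
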
	I0307 22:40:51.194413    7476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 22:40:51.210652    7476 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 22:40:51.246318    7476 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 22:40:51.402408    7476 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 22:40:51.543679    7476 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 22:40:51.543679    7476 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 22:40:51.581626    7476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:40:51.739328    7476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 22:40:53.226385    7476 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4864168s)
	I0307 22:40:53.237312    7476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 22:40:53.268017    7476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 22:40:53.298016    7476 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 22:40:53.448307    7476 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 22:40:53.606467    7476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:40:53.763565    7476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 22:40:53.798897    7476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 22:40:53.828018    7476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:40:53.995643    7476 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 22:40:54.086913    7476 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 22:40:54.100446    7476 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 22:40:54.107887    7476 start.go:562] Will wait 60s for crictl version
	I0307 22:40:54.119208    7476 ssh_runner.go:195] Run: which crictl
	I0307 22:40:54.134201    7476 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 22:40:54.192062    7476 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0307 22:40:54.201283    7476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 22:40:54.238295    7476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 22:40:54.275075    7476 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0307 22:40:54.275655    7476 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0307 22:40:54.280097    7476 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0307 22:40:54.280097    7476 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0307 22:40:54.280097    7476 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0307 22:40:54.280097    7476 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0307 22:40:54.282737    7476 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0307 22:40:54.282737    7476 ip.go:210] interface addr: 172.20.48.1/20
	I0307 22:40:54.295074    7476 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0307 22:40:54.297192    7476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
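The one-liner above is an idempotent hosts-file update: any existing line for the name is filtered out, the fresh mapping is appended, and the temporary file is copied back over /etc/hosts. Spelled out step by step (values copied from this run, otherwise illustrative):

name=host.minikube.internal
ip=172.20.48.1
{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
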
	I0307 22:40:54.317198    7476 kubeadm.go:877] updating cluster {Name:addons-723800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.2
8.4 ClusterName:addons-723800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.63.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 22:40:54.317729    7476 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 22:40:54.327371    7476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 22:40:54.345777    7476 docker.go:685] Got preloaded images: 
	I0307 22:40:54.347518    7476 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0307 22:40:54.357604    7476 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 22:40:54.385559    7476 ssh_runner.go:195] Run: which lz4
	I0307 22:40:54.400366    7476 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0307 22:40:54.403559    7476 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 22:40:54.407132    7476 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0307 22:40:56.716359    7476 docker.go:649] duration metric: took 2.3251563s to copy over tarball
	I0307 22:40:56.727874    7476 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 22:41:05.012446    7476 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.2844951s)
	I0307 22:41:05.012446    7476 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0307 22:41:05.076116    7476 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 22:41:05.094687    7476 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0307 22:41:05.142936    7476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:41:05.308629    7476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 22:41:09.397187    7476 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.0884587s)
	I0307 22:41:09.406992    7476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 22:41:09.429740    7476 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 22:41:09.429838    7476 cache_images.go:84] Images are preloaded, skipping loading
	I0307 22:41:09.429968    7476 kubeadm.go:928] updating node { 172.20.63.241 8443 v1.28.4 docker true true} ...
	I0307 22:41:09.430265    7476 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-723800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.63.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-723800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 22:41:09.438570    7476 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
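The docker info call above probes which cgroup driver the engine uses; its answer has to agree with the cgroupDriver field in the KubeletConfiguration generated just below (cgroupfs in this run). A manual cross-check (illustrative):

docker info --format '{{.CgroupDriver}}'          # expected to report cgroupfs here
grep cgroupDriver /var/lib/kubelet/config.yaml    # kubelet side, once the node is configured
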
	I0307 22:41:09.466566    7476 cni.go:84] Creating CNI manager for ""
	I0307 22:41:09.466566    7476 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 22:41:09.466566    7476 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 22:41:09.466566    7476 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.63.241 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-723800 NodeName:addons-723800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.63.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.63.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 22:41:09.467437    7476 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.63.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-723800"
	  kubeletExtraArgs:
	    node-ip: 172.20.63.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.63.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
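The kubeadm configuration printed above is what later lands in /var/tmp/minikube/kubeadm.yaml. If it ever needs to be exercised without touching the node, kubeadm's dry-run mode can consume the same file (illustrative, not part of this run):

sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
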
	I0307 22:41:09.479267    7476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 22:41:09.494547    7476 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 22:41:09.504115    7476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 22:41:09.519456    7476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0307 22:41:09.543943    7476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 22:41:09.570351    7476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0307 22:41:09.604627    7476 ssh_runner.go:195] Run: grep 172.20.63.241	control-plane.minikube.internal$ /etc/hosts
	I0307 22:41:09.611591    7476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.63.241	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 22:41:09.639248    7476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:41:09.794602    7476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 22:41:09.819596    7476 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800 for IP: 172.20.63.241
	I0307 22:41:09.819596    7476 certs.go:194] generating shared ca certs ...
	I0307 22:41:09.819596    7476 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:41:09.820122    7476 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0307 22:41:09.997663    7476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt ...
	I0307 22:41:09.997663    7476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt: {Name:mkfaab427ca81a644dd8158f14f3f807f65e8ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:41:10.003554    7476 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key ...
	I0307 22:41:10.003554    7476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key: {Name:mke77f92a4900f4ba92d06a20a85ddb2e967d43b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:41:10.004861    7476 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0307 22:41:10.552253    7476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0307 22:41:10.552253    7476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk06242bb3e648e29b1f160fecc7578d1c3ccbe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:41:10.559470    7476 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key ...
	I0307 22:41:10.559470    7476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk9dbfc690f0c353aa1a789ba901364f0646dd1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:41:10.560713    7476 certs.go:256] generating profile certs ...
	I0307 22:41:10.560713    7476 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.key
	I0307 22:41:10.560713    7476 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt with IP's: []
	I0307 22:41:10.614360    7476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt ...
	I0307 22:41:10.614360    7476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: {Name:mkaefd21a275f7256cca90b588b8628f776f1548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:41:10.616534    7476 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.key ...
	I0307 22:41:10.616534    7476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.key: {Name:mkcf40fdf879f2f258769264d19271ac416b879d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:41:10.617637    7476 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\apiserver.key.b8742d87
	I0307 22:41:10.618677    7476 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\apiserver.crt.b8742d87 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.63.241]
	I0307 22:41:10.759617    7476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\apiserver.crt.b8742d87 ...
	I0307 22:41:10.759617    7476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\apiserver.crt.b8742d87: {Name:mka70370fcbed9f2ac902022239fc4148e841764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:41:10.764383    7476 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\apiserver.key.b8742d87 ...
	I0307 22:41:10.764383    7476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\apiserver.key.b8742d87: {Name:mke89d90f2b5b9cac4b0ec59e087119f82a072f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:41:10.765549    7476 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\apiserver.crt.b8742d87 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\apiserver.crt
	I0307 22:41:10.769173    7476 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\apiserver.key.b8742d87 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\apiserver.key
	I0307 22:41:10.775962    7476 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\proxy-client.key
	I0307 22:41:10.776882    7476 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\proxy-client.crt with IP's: []
	I0307 22:41:11.336832    7476 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\proxy-client.crt ...
	I0307 22:41:11.336832    7476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\proxy-client.crt: {Name:mk882c990c0db7b8c9f3d14b35d41328b1456374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:41:11.347157    7476 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\proxy-client.key ...
	I0307 22:41:11.347157    7476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\proxy-client.key: {Name:mk8bcf471670243fc22c36caf4c66f982d175027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:41:11.357740    7476 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0307 22:41:11.358458    7476 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0307 22:41:11.358592    7476 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0307 22:41:11.358812    7476 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0307 22:41:11.359022    7476 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 22:41:11.398132    7476 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 22:41:11.435715    7476 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 22:41:11.477370    7476 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0307 22:41:11.514567    7476 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0307 22:41:11.549519    7476 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 22:41:11.589742    7476 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 22:41:11.628202    7476 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 22:41:11.665746    7476 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 22:41:11.702920    7476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 22:41:11.748759    7476 ssh_runner.go:195] Run: openssl version
	I0307 22:41:11.765607    7476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 22:41:11.792412    7476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 22:41:11.798100    7476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0307 22:41:11.808856    7476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 22:41:11.829240    7476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
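The openssl -hash / ln -fs pair above follows the OpenSSL hash-directory convention: the CA is linked under a name derived from its subject hash (b5213941 in this run) so that anything scanning /etc/ssl/certs can locate it. Recreated by hand (illustrative):

hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
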
	I0307 22:41:11.861615    7476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 22:41:11.870515    7476 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0307 22:41:11.871173    7476 kubeadm.go:391] StartCluster: {Name:addons-723800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4
ClusterName:addons-723800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.63.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 22:41:11.879537    7476 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 22:41:11.910931    7476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 22:41:11.936902    7476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 22:41:11.961885    7476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 22:41:11.975252    7476 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 22:41:11.975252    7476 kubeadm.go:156] found existing configuration files:
	
	I0307 22:41:11.986471    7476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0307 22:41:11.988324    7476 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 22:41:12.012510    7476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 22:41:12.044594    7476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0307 22:41:12.060823    7476 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 22:41:12.071637    7476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 22:41:12.099109    7476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0307 22:41:12.114014    7476 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 22:41:12.125626    7476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 22:41:12.150497    7476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0307 22:41:12.164240    7476 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 22:41:12.177166    7476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 22:41:12.192777    7476 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 22:41:12.418169    7476 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 22:41:23.774237    7476 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0307 22:41:23.774319    7476 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 22:41:23.774319    7476 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 22:41:23.774319    7476 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 22:41:23.774923    7476 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 22:41:23.774923    7476 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 22:41:23.777407    7476 out.go:204]   - Generating certificates and keys ...
	I0307 22:41:23.777407    7476 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 22:41:23.777407    7476 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 22:41:23.778074    7476 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 22:41:23.778196    7476 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0307 22:41:23.778483    7476 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0307 22:41:23.778536    7476 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0307 22:41:23.778536    7476 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0307 22:41:23.778536    7476 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-723800 localhost] and IPs [172.20.63.241 127.0.0.1 ::1]
	I0307 22:41:23.779208    7476 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0307 22:41:23.779208    7476 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-723800 localhost] and IPs [172.20.63.241 127.0.0.1 ::1]
	I0307 22:41:23.779208    7476 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 22:41:23.779820    7476 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 22:41:23.779916    7476 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0307 22:41:23.779916    7476 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 22:41:23.779916    7476 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 22:41:23.779916    7476 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 22:41:23.780442    7476 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 22:41:23.780583    7476 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 22:41:23.780641    7476 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 22:41:23.780641    7476 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 22:41:23.783735    7476 out.go:204]   - Booting up control plane ...
	I0307 22:41:23.783735    7476 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 22:41:23.783735    7476 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 22:41:23.784362    7476 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 22:41:23.784362    7476 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 22:41:23.784362    7476 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 22:41:23.785046    7476 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 22:41:23.785526    7476 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 22:41:23.785526    7476 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.004704 seconds
	I0307 22:41:23.785526    7476 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 22:41:23.786181    7476 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 22:41:23.786322    7476 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 22:41:23.786526    7476 kubeadm.go:309] [mark-control-plane] Marking the node addons-723800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 22:41:23.786526    7476 kubeadm.go:309] [bootstrap-token] Using token: som4ke.j25m1ducmisol8e0
	I0307 22:41:23.788921    7476 out.go:204]   - Configuring RBAC rules ...
	I0307 22:41:23.788921    7476 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 22:41:23.789676    7476 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 22:41:23.789676    7476 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 22:41:23.790256    7476 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 22:41:23.790311    7476 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 22:41:23.790311    7476 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 22:41:23.791088    7476 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 22:41:23.791265    7476 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 22:41:23.791265    7476 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 22:41:23.791265    7476 kubeadm.go:309] 
	I0307 22:41:23.791265    7476 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 22:41:23.791265    7476 kubeadm.go:309] 
	I0307 22:41:23.791265    7476 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 22:41:23.791838    7476 kubeadm.go:309] 
	I0307 22:41:23.791838    7476 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 22:41:23.791838    7476 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 22:41:23.791838    7476 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 22:41:23.791838    7476 kubeadm.go:309] 
	I0307 22:41:23.792377    7476 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 22:41:23.792445    7476 kubeadm.go:309] 
	I0307 22:41:23.792619    7476 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 22:41:23.792619    7476 kubeadm.go:309] 
	I0307 22:41:23.792619    7476 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 22:41:23.792619    7476 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 22:41:23.793238    7476 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 22:41:23.793238    7476 kubeadm.go:309] 
	I0307 22:41:23.793238    7476 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 22:41:23.793238    7476 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 22:41:23.793238    7476 kubeadm.go:309] 
	I0307 22:41:23.793806    7476 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token som4ke.j25m1ducmisol8e0 \
	I0307 22:41:23.794051    7476 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 \
	I0307 22:41:23.794051    7476 kubeadm.go:309] 	--control-plane 
	I0307 22:41:23.794051    7476 kubeadm.go:309] 
	I0307 22:41:23.794051    7476 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 22:41:23.794051    7476 kubeadm.go:309] 
	I0307 22:41:23.794051    7476 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token som4ke.j25m1ducmisol8e0 \
	I0307 22:41:23.794051    7476 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
	I0307 22:41:23.794051    7476 cni.go:84] Creating CNI manager for ""
	I0307 22:41:23.794051    7476 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 22:41:23.797361    7476 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 22:41:23.811079    7476 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 22:41:23.832105    7476 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
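The 457-byte conflist pushed above is the bridge CNI configuration for this node; its exact contents are not captured in the log, but a generic bridge conflist of the same shape looks roughly like this (illustrative only, apart from the 10.244.0.0/16 pod CIDR taken from this run):

tee /tmp/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge0", "isGateway": true, "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
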
	I0307 22:41:23.896835    7476 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 22:41:23.911062    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:23.912346    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-723800 minikube.k8s.io/updated_at=2024_03_07T22_41_23_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=addons-723800 minikube.k8s.io/primary=true
	I0307 22:41:23.924671    7476 ops.go:34] apiserver oom_adj: -16
	I0307 22:41:24.212049    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:24.726732    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:25.214440    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:25.715175    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:26.216176    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:26.721164    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:27.219820    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:27.721272    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:28.217625    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:28.725371    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:29.218392    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:29.714082    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:30.221901    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:30.720167    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:31.226128    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:31.714622    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:32.226777    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:32.717077    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:33.220276    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:33.718440    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:34.223411    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:34.718219    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:35.217331    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:35.728988    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:36.224729    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:36.728436    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:37.228995    7476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 22:41:37.336414    7476 kubeadm.go:1106] duration metric: took 13.439455s to wait for elevateKubeSystemPrivileges
	W0307 22:41:37.336554    7476 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 22:41:37.336554    7476 kubeadm.go:393] duration metric: took 25.4652215s to StartCluster
	I0307 22:41:37.336694    7476 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:41:37.336850    7476 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 22:41:37.337742    7476 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:41:37.338767    7476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 22:41:37.339373    7476 start.go:234] Will wait 6m0s for node &{Name: IP:172.20.63.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 22:41:37.342830    7476 out.go:177] * Verifying Kubernetes components...
	I0307 22:41:37.339485    7476 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0307 22:41:37.339485    7476 config.go:182] Loaded profile config "addons-723800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 22:41:37.342864    7476 addons.go:69] Setting yakd=true in profile "addons-723800"
	I0307 22:41:37.342864    7476 addons.go:69] Setting helm-tiller=true in profile "addons-723800"
	I0307 22:41:37.342864    7476 addons.go:69] Setting ingress=true in profile "addons-723800"
	I0307 22:41:37.342864    7476 addons.go:69] Setting ingress-dns=true in profile "addons-723800"
	I0307 22:41:37.342864    7476 addons.go:69] Setting gcp-auth=true in profile "addons-723800"
	I0307 22:41:37.342864    7476 addons.go:69] Setting inspektor-gadget=true in profile "addons-723800"
	I0307 22:41:37.342864    7476 addons.go:69] Setting metrics-server=true in profile "addons-723800"
	I0307 22:41:37.342864    7476 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-723800"
	I0307 22:41:37.342864    7476 addons.go:69] Setting cloud-spanner=true in profile "addons-723800"
	I0307 22:41:37.342864    7476 addons.go:69] Setting registry=true in profile "addons-723800"
	I0307 22:41:37.342864    7476 addons.go:69] Setting volumesnapshots=true in profile "addons-723800"
	I0307 22:41:37.342864    7476 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-723800"
	I0307 22:41:37.342864    7476 addons.go:69] Setting storage-provisioner=true in profile "addons-723800"
	I0307 22:41:37.342864    7476 addons.go:69] Setting default-storageclass=true in profile "addons-723800"
	I0307 22:41:37.342864    7476 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-723800"
	I0307 22:41:37.345887    7476 addons.go:234] Setting addon helm-tiller=true in "addons-723800"
	I0307 22:41:37.345887    7476 addons.go:234] Setting addon ingress=true in "addons-723800"
	I0307 22:41:37.346076    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:37.346076    7476 addons.go:234] Setting addon ingress-dns=true in "addons-723800"
	I0307 22:41:37.346076    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:37.346250    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:37.346291    7476 addons.go:234] Setting addon volumesnapshots=true in "addons-723800"
	I0307 22:41:37.346411    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:37.346477    7476 addons.go:234] Setting addon inspektor-gadget=true in "addons-723800"
	I0307 22:41:37.346625    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:37.346774    7476 addons.go:234] Setting addon yakd=true in "addons-723800"
	I0307 22:41:37.346250    7476 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-723800"
	I0307 22:41:37.347134    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:37.346076    7476 addons.go:234] Setting addon cloud-spanner=true in "addons-723800"
	I0307 22:41:37.346291    7476 addons.go:234] Setting addon registry=true in "addons-723800"
	I0307 22:41:37.347349    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:37.347454    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:37.346891    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:37.346891    7476 addons.go:234] Setting addon metrics-server=true in "addons-723800"
	I0307 22:41:37.347733    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:37.347875    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.346980    7476 addons.go:234] Setting addon storage-provisioner=true in "addons-723800"
	I0307 22:41:37.346980    7476 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-723800"
	I0307 22:41:37.348003    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:37.345887    7476 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-723800"
	I0307 22:41:37.346980    7476 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-723800"
	I0307 22:41:37.348635    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:37.347253    7476 mustload.go:65] Loading cluster: addons-723800
	I0307 22:41:37.347584    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.349480    7476 config.go:182] Loaded profile config "addons-723800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 22:41:37.349601    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.352103    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.352339    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.353060    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.353746    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.353746    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.354355    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.354410    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.356083    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.357129    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.357179    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.360270    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.360961    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:37.364243    7476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:41:38.166521    7476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0307 22:41:38.356799    7476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 22:41:42.910423    7476 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.7438585s)
	I0307 22:41:42.910423    7476 start.go:948] {"host.minikube.internal": 172.20.48.1} host record injected into CoreDNS's ConfigMap
	I0307 22:41:42.912341    7476 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.5554993s)
	I0307 22:41:42.920555    7476 node_ready.go:35] waiting up to 6m0s for node "addons-723800" to be "Ready" ...
	I0307 22:41:42.965596    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:42.965596    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:42.971818    7476 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0307 22:41:42.974432    7476 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0307 22:41:42.981968    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0307 22:41:42.988532    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:43.012672    7476 node_ready.go:49] node "addons-723800" has status "Ready":"True"
	I0307 22:41:43.012672    7476 node_ready.go:38] duration metric: took 92.1165ms for node "addons-723800" to be "Ready" ...
	I0307 22:41:43.012672    7476 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 22:41:43.037140    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:43.037140    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:43.057186    7476 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0307 22:41:43.083585    7476 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0307 22:41:43.087438    7476 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0307 22:41:43.099575    7476 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0307 22:41:43.099575    7476 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0307 22:41:43.099575    7476 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace to be "Ready" ...
	I0307 22:41:43.139550    7476 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0307 22:41:43.158673    7476 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0307 22:41:43.165779    7476 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0307 22:41:43.169035    7476 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0307 22:41:43.169035    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0307 22:41:43.169035    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:43.318049    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:43.318049    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:43.327669    7476 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0307 22:41:43.318049    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:43.327669    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:43.334055    7476 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0307 22:41:43.340560    7476 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0307 22:41:43.340560    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0307 22:41:43.340560    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:43.338721    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:43.349711    7476 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 22:41:43.342820    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:43.363529    7476 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 22:41:43.366082    7476 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 22:41:43.366082    7476 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 22:41:43.382502    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 22:41:43.382502    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:43.384945    7476 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 22:41:43.384945    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0307 22:41:43.385112    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:43.391520    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:43.391520    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:43.396340    7476 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0307 22:41:43.398465    7476 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0307 22:41:43.398465    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0307 22:41:43.398465    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:43.516144    7476 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-723800" context rescaled to 1 replicas
	I0307 22:41:43.713248    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:43.713248    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:43.716915    7476 out.go:177]   - Using image docker.io/registry:2.8.3
	I0307 22:41:43.716915    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:43.724288    7476 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0307 22:41:43.721548    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:43.731779    7476 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0307 22:41:43.731779    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0307 22:41:43.731779    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:43.734332    7476 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-723800"
	I0307 22:41:43.734948    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:43.736545    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:43.740851    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:43.740851    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:43.746330    7476 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0307 22:41:43.749471    7476 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0307 22:41:43.748219    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:43.749615    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:43.753050    7476 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0307 22:41:43.750094    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0307 22:41:43.753472    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:43.769011    7476 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 22:41:43.769011    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0307 22:41:43.769011    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:44.148156    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:44.148156    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:44.148156    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:44.166076    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:44.166076    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:44.168902    7476 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0307 22:41:44.170926    7476 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0307 22:41:44.170926    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0307 22:41:44.170926    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:44.201034    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:44.201034    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:44.208256    7476 addons.go:234] Setting addon default-storageclass=true in "addons-723800"
	I0307 22:41:44.208256    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:41:44.208256    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:44.233137    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:44.233137    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:44.243146    7476 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0307 22:41:44.245586    7476 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 22:41:44.245586    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0307 22:41:44.245586    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:44.432462    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:44.432462    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:44.437511    7476 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0307 22:41:44.458672    7476 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0307 22:41:44.458672    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0307 22:41:44.458672    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:45.157852    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:41:47.174948    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:41:48.511317    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:48.511317    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:48.513957    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:48.618177    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:48.618177    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:48.618177    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:48.644817    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:48.651643    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:48.651713    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:48.876366    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:48.876366    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:48.876366    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:48.902430    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:48.902430    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:48.902430    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:49.020894    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:49.020894    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:49.020894    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:49.153576    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:49.153688    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:49.158130    7476 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0307 22:41:49.174178    7476 out.go:177]   - Using image docker.io/busybox:stable
	I0307 22:41:49.178989    7476 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 22:41:49.178989    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0307 22:41:49.178989    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:49.260696    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:41:49.417579    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:49.417579    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:49.417579    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:49.452127    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:49.454218    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:49.454375    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:49.822519    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:49.822519    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:49.822519    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:50.209128    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:50.209128    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:50.209128    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:50.439464    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:50.439517    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:50.439622    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:50.505856    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:50.505856    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:50.506130    7476 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 22:41:50.506175    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 22:41:50.506268    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:51.133320    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:51.133320    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:51.133320    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:51.612028    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:41:52.042509    7476 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0307 22:41:52.042509    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:41:53.637510    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:41:54.455541    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:54.455541    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:54.455541    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:54.908572    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:54.908572    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:54.909965    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:54.974199    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:54.974199    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:54.974199    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:55.105782    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:55.105782    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:55.105782    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:55.231623    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:55.231676    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:55.232693    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:55.337449    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:55.337449    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:55.337886    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:55.371010    7476 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0307 22:41:55.371010    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0307 22:41:55.377777    7476 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0307 22:41:55.377829    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0307 22:41:55.468615    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:55.468672    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:55.469687    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:55.564473    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:55.564704    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:55.566061    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:55.576911    7476 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0307 22:41:55.577071    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0307 22:41:55.602048    7476 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0307 22:41:55.602048    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0307 22:41:55.630397    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:55.630397    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:55.631896    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:55.647737    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0307 22:41:55.696025    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:55.696025    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:55.697343    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:55.736309    7476 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0307 22:41:55.736378    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0307 22:41:55.742440    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:55.746766    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:55.746882    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:55.769702    7476 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0307 22:41:55.769702    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0307 22:41:55.786895    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 22:41:55.835514    7476 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0307 22:41:55.835569    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0307 22:41:55.904184    7476 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0307 22:41:55.904275    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0307 22:41:55.913790    7476 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0307 22:41:55.913790    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0307 22:41:55.948579    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 22:41:55.988375    7476 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0307 22:41:55.988435    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0307 22:41:56.103799    7476 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0307 22:41:56.103799    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0307 22:41:56.116954    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 22:41:56.124922    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:41:56.169928    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0307 22:41:56.184285    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 22:41:56.205823    7476 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0307 22:41:56.205823    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0307 22:41:56.235373    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0307 22:41:56.254077    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:56.262934    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:56.262934    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:56.330183    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:56.330183    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:56.331141    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:56.393645    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:56.393645    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:56.394799    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:56.414846    7476 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0307 22:41:56.414902    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0307 22:41:56.416015    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:41:56.416700    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:56.416772    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:41:56.442550    7476 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0307 22:41:56.442550    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0307 22:41:56.616977    7476 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0307 22:41:56.617086    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0307 22:41:56.685482    7476 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 22:41:56.685598    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0307 22:41:56.761350    7476 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0307 22:41:56.761420    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0307 22:41:56.900560    7476 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0307 22:41:56.900560    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0307 22:41:56.961081    7476 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0307 22:41:56.961193    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0307 22:41:57.051641    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 22:41:57.080517    7476 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0307 22:41:57.080517    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0307 22:41:57.154875    7476 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0307 22:41:57.154875    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0307 22:41:57.225794    7476 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0307 22:41:57.225794    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0307 22:41:57.269494    7476 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0307 22:41:57.269494    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0307 22:41:57.313245    7476 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 22:41:57.313245    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0307 22:41:57.315407    7476 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0307 22:41:57.315407    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0307 22:41:57.429795    7476 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0307 22:41:57.429897    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0307 22:41:57.468362    7476 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0307 22:41:57.468362    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0307 22:41:57.469261    7476 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0307 22:41:57.469261    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0307 22:41:57.471441    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 22:41:57.641234    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:57.643556    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:57.643726    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:57.658848    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0307 22:41:57.664889    7476 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0307 22:41:57.664889    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0307 22:41:57.672476    7476 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 22:41:57.672476    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0307 22:41:57.816491    7476 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0307 22:41:57.816491    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0307 22:41:57.894204    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 22:41:57.986678    7476 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0307 22:41:57.997827    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0307 22:41:58.129095    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:41:58.198577    7476 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 22:41:58.198577    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0307 22:41:58.207594    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 22:41:58.485444    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:58.497258    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:58.497558    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:58.622782    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 22:41:58.658469    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.0107041s)
	I0307 22:41:59.004058    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:41:59.004058    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:41:59.004058    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
	I0307 22:41:59.369997    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 22:41:59.894590    7476 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0307 22:42:00.132455    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:42:00.196376    7476 addons.go:234] Setting addon gcp-auth=true in "addons-723800"
	I0307 22:42:00.196376    7476 host.go:66] Checking if "addons-723800" exists ...
	I0307 22:42:00.197765    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:42:02.268842    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:42:02.268842    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:42:02.287336    7476 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0307 22:42:02.287336    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-723800 ).state
	I0307 22:42:02.628441    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:42:04.646262    7476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:42:04.646262    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:42:04.651279    7476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-723800 ).networkadapters[0]).ipaddresses[0]
	I0307 22:42:04.735036    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:42:06.915424    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.1284266s)
	I0307 22:42:06.915424    7476 addons.go:470] Verifying addon ingress=true in "addons-723800"
	I0307 22:42:06.915424    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.9667447s)
	I0307 22:42:06.915424    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.7983712s)
	I0307 22:42:06.915424    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.7453361s)
	I0307 22:42:06.915424    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.7310408s)
	I0307 22:42:06.915424    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.679953s)
	I0307 22:42:06.916047    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.8641545s)
	I0307 22:42:06.921329    7476 out.go:177] * Verifying ingress addon...
	I0307 22:42:06.925402    7476 addons.go:470] Verifying addon registry=true in "addons-723800"
	I0307 22:42:06.925402    7476 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-723800 service yakd-dashboard -n yakd-dashboard
	
	I0307 22:42:06.925402    7476 addons.go:470] Verifying addon metrics-server=true in "addons-723800"
	I0307 22:42:06.931697    7476 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0307 22:42:06.931697    7476 out.go:177] * Verifying registry addon...
	I0307 22:42:06.943756    7476 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0307 22:42:07.032872    7476 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0307 22:42:07.032872    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:07.071065    7476 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0307 22:42:07.071065    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:07.154906    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:42:07.155741    7476 main.go:141] libmachine: [stdout =====>] : 172.20.63.241
	
	I0307 22:42:07.163663    7476 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:42:07.164300    7476 sshutil.go:53] new ssh client: &{IP:172.20.63.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa Username:docker}
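
sshutil.go:53 reports the SSH client built from the VM's address, port 22, and the machine's id_rsa key; that client then runs commands such as the `cat /var/lib/minikube/google_application_credentials.json` call above. A sketch of that connection with golang.org/x/crypto/ssh follows; the address, user, and key path are the values from the log line, while the code itself is an assumption rather than minikube's sshutil implementation.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path, user, and address are the values reported by sshutil.go:53 above.
        key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\addons-723800\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "172.20.63.241:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("cat /var/lib/minikube/google_application_credentials.json")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out)
    }
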
	I0307 22:42:07.452459    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:07.476155    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:07.954879    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:07.954879    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:08.491858    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:08.496699    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:08.724852    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (11.0659022s)
	I0307 22:42:08.724852    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.8305487s)
	W0307 22:42:08.725386    7476 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0307 22:42:08.725511    7476 retry.go:31] will retry after 340.303555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
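
The first apply of the snapshot manifests fails because the VolumeSnapshotClass object references a CRD that is not yet established ("ensure CRDs are installed first"), so retry.go schedules another attempt; the later `kubectl apply --force` at 22:42:09 then completes in about 2.4s. A minimal sketch of that retry-until-success shape, written as a generic helper rather than minikube's retry.go:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retry runs fn up to attempts times, sleeping delay between failures,
    // roughly the pattern behind "will retry after 340.303555ms" in the log.
    func retry(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(delay)
        }
        return fmt.Errorf("after %d attempts: %w", attempts, err)
    }

    func main() {
        calls := 0
        err := retry(5, 340*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                // Stand-in for "no matches for kind VolumeSnapshotClass":
                // the CRD is not yet established on early attempts.
                return errors.New("ensure CRDs are installed first")
            }
            return nil
        })
        fmt.Println("result:", err, "after", calls, "calls")
    }
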
	I0307 22:42:08.725511    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.5178205s)
	I0307 22:42:08.725511    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (10.1026368s)
	I0307 22:42:08.725511    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.3554289s)
	I0307 22:42:08.725511    7476 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.4381161s)
	I0307 22:42:08.727792    7476 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 22:42:08.729212    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (11.2575831s)
	I0307 22:42:08.730758    7476 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-723800"
	I0307 22:42:08.730758    7476 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0307 22:42:08.734398    7476 out.go:177] * Verifying csi-hostpath-driver addon...
	I0307 22:42:08.734185    7476 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0307 22:42:08.734398    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0307 22:42:08.737643    7476 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0307 22:42:08.794101    7476 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0307 22:42:08.794101    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0307 22:42:08.807096    7476 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0307 22:42:08.807096    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0307 22:42:08.821634    7476 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
	I0307 22:42:08.893181    7476 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 22:42:08.893181    7476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0307 22:42:08.946947    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:08.948811    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 22:42:08.956970    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:09.078007    7476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 22:42:09.262254    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:09.489707    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:09.489707    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:09.665797    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:42:09.759940    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:09.958832    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:09.963534    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:10.254179    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:10.446492    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:10.450075    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:10.763877    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:10.963221    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:10.967410    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:11.253490    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:11.472771    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:11.493105    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.5442709s)
	I0307 22:42:11.493214    7476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.4151848s)
	I0307 22:42:11.504163    7476 addons.go:470] Verifying addon gcp-auth=true in "addons-723800"
	I0307 22:42:11.507198    7476 out.go:177] * Verifying gcp-auth addon...
	I0307 22:42:11.507198    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:11.513278    7476 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0307 22:42:11.537165    7476 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0307 22:42:11.537165    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:12.050815    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:12.051203    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:12.055901    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:12.055901    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:12.326023    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:12.339300    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:42:12.449340    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:12.455231    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:12.529687    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:12.758885    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:12.951920    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:12.953517    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:13.021452    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:13.262610    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:13.639255    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:13.641310    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:13.642948    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:13.752679    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:13.957342    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:13.958522    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:14.031526    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:14.259890    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:14.447122    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:14.451259    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:14.529243    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:14.621545    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:42:14.777153    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:14.961121    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:14.961391    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:15.039581    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:15.253929    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:15.466984    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:15.474687    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:15.525701    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:15.754651    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:15.954242    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:15.960712    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:16.029694    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:16.255295    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:16.450746    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:16.454135    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:16.522633    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:16.635252    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:42:16.752992    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:16.945109    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:16.950741    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:17.024263    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:17.258715    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:17.452710    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:17.452710    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:17.524330    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:17.754981    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:17.955890    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:17.957717    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:18.032234    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:18.247144    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:18.450645    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:18.451760    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:18.529503    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:18.753202    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:18.942539    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:18.959751    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:19.033851    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:19.121918    7476 pod_ready.go:102] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"False"
	I0307 22:42:19.258087    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:19.454434    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:19.454434    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:19.528305    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:19.756042    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:19.950263    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:19.963009    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:20.028805    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:20.255006    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:20.461840    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:20.462555    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:20.531500    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:20.760524    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:20.944397    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:20.950094    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:21.027035    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:21.128807    7476 pod_ready.go:92] pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace has status "Ready":"True"
	I0307 22:42:21.128807    7476 pod_ready.go:81] duration metric: took 38.0184758s for pod "coredns-5dd5756b68-8j9qj" in "kube-system" namespace to be "Ready" ...
	I0307 22:42:21.128807    7476 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kz6mv" in "kube-system" namespace to be "Ready" ...
	I0307 22:42:21.131743    7476 pod_ready.go:97] error getting pod "coredns-5dd5756b68-kz6mv" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-kz6mv" not found
	I0307 22:42:21.131790    7476 pod_ready.go:81] duration metric: took 2.9823ms for pod "coredns-5dd5756b68-kz6mv" in "kube-system" namespace to be "Ready" ...
	E0307 22:42:21.131836    7476 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-kz6mv" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-kz6mv" not found
	I0307 22:42:21.131836    7476 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-723800" in "kube-system" namespace to be "Ready" ...
	I0307 22:42:21.144285    7476 pod_ready.go:92] pod "etcd-addons-723800" in "kube-system" namespace has status "Ready":"True"
	I0307 22:42:21.144285    7476 pod_ready.go:81] duration metric: took 12.4483ms for pod "etcd-addons-723800" in "kube-system" namespace to be "Ready" ...
	I0307 22:42:21.144285    7476 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-723800" in "kube-system" namespace to be "Ready" ...
	I0307 22:42:21.153285    7476 pod_ready.go:92] pod "kube-apiserver-addons-723800" in "kube-system" namespace has status "Ready":"True"
	I0307 22:42:21.153285    7476 pod_ready.go:81] duration metric: took 9.0002ms for pod "kube-apiserver-addons-723800" in "kube-system" namespace to be "Ready" ...
	I0307 22:42:21.153818    7476 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-723800" in "kube-system" namespace to be "Ready" ...
	I0307 22:42:21.161109    7476 pod_ready.go:92] pod "kube-controller-manager-addons-723800" in "kube-system" namespace has status "Ready":"True"
	I0307 22:42:21.161109    7476 pod_ready.go:81] duration metric: took 7.2911ms for pod "kube-controller-manager-addons-723800" in "kube-system" namespace to be "Ready" ...
	I0307 22:42:21.161109    7476 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qs82f" in "kube-system" namespace to be "Ready" ...
	I0307 22:42:21.255875    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:21.335792    7476 pod_ready.go:92] pod "kube-proxy-qs82f" in "kube-system" namespace has status "Ready":"True"
	I0307 22:42:21.335888    7476 pod_ready.go:81] duration metric: took 174.7779ms for pod "kube-proxy-qs82f" in "kube-system" namespace to be "Ready" ...
	I0307 22:42:21.335888    7476 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-723800" in "kube-system" namespace to be "Ready" ...
	I0307 22:42:21.441326    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:21.459197    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:21.527562    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:21.731932    7476 pod_ready.go:92] pod "kube-scheduler-addons-723800" in "kube-system" namespace has status "Ready":"True"
	I0307 22:42:21.731986    7476 pod_ready.go:81] duration metric: took 396.0938ms for pod "kube-scheduler-addons-723800" in "kube-system" namespace to be "Ready" ...
	I0307 22:42:21.732052    7476 pod_ready.go:38] duration metric: took 38.7190238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
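
pod_ready.go polls each system-critical pod until its Ready condition is True, and kapi.go does the same for addon label selectors such as kubernetes.io/minikube-addons=registry. A hedged client-go sketch of such a polling loop follows; the namespace, selector, kubeconfig path, and 6m timeout are taken from the log, but the code is illustrative rather than minikube's kapi.go.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        selector := "kubernetes.io/minikube-addons=registry" // label from the log above
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
                fmt.Println("pod is Ready:", pods.Items[0].Name)
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for", selector)
    }
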
	I0307 22:42:21.732105    7476 api_server.go:52] waiting for apiserver process to appear ...
	I0307 22:42:21.744653    7476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 22:42:21.750097    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:21.775325    7476 api_server.go:72] duration metric: took 44.4354785s to wait for apiserver process to appear ...
	I0307 22:42:21.775325    7476 api_server.go:88] waiting for apiserver healthz status ...
	I0307 22:42:21.775426    7476 api_server.go:253] Checking apiserver healthz at https://172.20.63.241:8443/healthz ...
	I0307 22:42:21.782292    7476 api_server.go:279] https://172.20.63.241:8443/healthz returned 200:
	ok
	I0307 22:42:21.783975    7476 api_server.go:141] control plane version: v1.28.4
	I0307 22:42:21.783975    7476 api_server.go:131] duration metric: took 8.6503ms to wait for apiserver health ...
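
api_server.go waits for the kube-apiserver process via pgrep and then probes https://172.20.63.241:8443/healthz until it answers 200 "ok". A small sketch of that probe; the endpoint comes from the log, and skipping TLS verification is an assumption made only to keep the example self-contained (the real check trusts the cluster CA).

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The real healthz check verifies the cluster CA; skipping verification
            // here only keeps this sketch self-contained.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://172.20.63.241:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }
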
	I0307 22:42:21.783975    7476 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 22:42:21.930710    7476 system_pods.go:59] 18 kube-system pods found
	I0307 22:42:21.930710    7476 system_pods.go:61] "coredns-5dd5756b68-8j9qj" [38e56d9d-339a-4d12-b9ac-f11bb0e03ad4] Running
	I0307 22:42:21.930710    7476 system_pods.go:61] "csi-hostpath-attacher-0" [a8b4d241-b11e-43cc-aaa5-5ac18f92a99d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0307 22:42:21.930710    7476 system_pods.go:61] "csi-hostpath-resizer-0" [23cee7c8-f0dd-417f-8f48-a32d59d18cd3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0307 22:42:21.930710    7476 system_pods.go:61] "csi-hostpathplugin-v2wbg" [31f05d78-dcf5-4dca-b4ee-0d10224c4997] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 22:42:21.930710    7476 system_pods.go:61] "etcd-addons-723800" [b8eb6807-6fe6-4215-9304-5eb71647e84f] Running
	I0307 22:42:21.930710    7476 system_pods.go:61] "kube-apiserver-addons-723800" [050ec3bc-7712-40ca-a52c-72f280b628f9] Running
	I0307 22:42:21.930710    7476 system_pods.go:61] "kube-controller-manager-addons-723800" [741dd06e-9ef0-4fda-abfa-4e5fd6572ab8] Running
	I0307 22:42:21.930710    7476 system_pods.go:61] "kube-ingress-dns-minikube" [0ea4afaa-e10d-462b-9036-42345f655462] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0307 22:42:21.930710    7476 system_pods.go:61] "kube-proxy-qs82f" [ee3f515f-4a59-4bf1-9220-576fee19a13d] Running
	I0307 22:42:21.930710    7476 system_pods.go:61] "kube-scheduler-addons-723800" [4c252d46-f39a-44b1-bd60-e02ef8d5f989] Running
	I0307 22:42:21.930710    7476 system_pods.go:61] "metrics-server-69cf46c98-5572t" [75d7cf2c-199d-4d5f-8105-f7b1bdb0812d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0307 22:42:21.930710    7476 system_pods.go:61] "nvidia-device-plugin-daemonset-wthv5" [143a4a10-8313-40ab-a7f6-613f980a9728] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0307 22:42:21.930710    7476 system_pods.go:61] "registry-proxy-gggz5" [f077c346-86c0-44cb-93fa-374ceed6f5c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0307 22:42:21.930710    7476 system_pods.go:61] "registry-vq7gp" [15283e17-0641-48c8-bed6-7b74b3939a32] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0307 22:42:21.930710    7476 system_pods.go:61] "snapshot-controller-58dbcc7b99-7d5jj" [d0b5c16d-10bb-4223-8857-210019ce726f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 22:42:21.930710    7476 system_pods.go:61] "snapshot-controller-58dbcc7b99-slp74" [96f0d7e4-91f3-4bea-ba12-f33b4ec03aec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 22:42:21.930710    7476 system_pods.go:61] "storage-provisioner" [5f9aed8e-67b2-47fb-9d13-d2d67f48234d] Running
	I0307 22:42:21.930710    7476 system_pods.go:61] "tiller-deploy-7b677967b9-k7wdr" [bd5da928-4e4f-4fc2-8cea-2a5c1602c03e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0307 22:42:21.930710    7476 system_pods.go:74] duration metric: took 146.7333ms to wait for pod list to return data ...
	I0307 22:42:21.930710    7476 default_sa.go:34] waiting for default service account to be created ...
	I0307 22:42:21.942837    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:21.962803    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:22.022147    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:22.131098    7476 default_sa.go:45] found service account: "default"
	I0307 22:42:22.131199    7476 default_sa.go:55] duration metric: took 200.4872ms for default service account to be created ...
	I0307 22:42:22.131227    7476 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 22:42:22.256786    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:22.333097    7476 system_pods.go:86] 18 kube-system pods found
	I0307 22:42:22.333097    7476 system_pods.go:89] "coredns-5dd5756b68-8j9qj" [38e56d9d-339a-4d12-b9ac-f11bb0e03ad4] Running
	I0307 22:42:22.333097    7476 system_pods.go:89] "csi-hostpath-attacher-0" [a8b4d241-b11e-43cc-aaa5-5ac18f92a99d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0307 22:42:22.333097    7476 system_pods.go:89] "csi-hostpath-resizer-0" [23cee7c8-f0dd-417f-8f48-a32d59d18cd3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0307 22:42:22.333097    7476 system_pods.go:89] "csi-hostpathplugin-v2wbg" [31f05d78-dcf5-4dca-b4ee-0d10224c4997] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 22:42:22.333097    7476 system_pods.go:89] "etcd-addons-723800" [b8eb6807-6fe6-4215-9304-5eb71647e84f] Running
	I0307 22:42:22.333097    7476 system_pods.go:89] "kube-apiserver-addons-723800" [050ec3bc-7712-40ca-a52c-72f280b628f9] Running
	I0307 22:42:22.333097    7476 system_pods.go:89] "kube-controller-manager-addons-723800" [741dd06e-9ef0-4fda-abfa-4e5fd6572ab8] Running
	I0307 22:42:22.333097    7476 system_pods.go:89] "kube-ingress-dns-minikube" [0ea4afaa-e10d-462b-9036-42345f655462] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0307 22:42:22.333097    7476 system_pods.go:89] "kube-proxy-qs82f" [ee3f515f-4a59-4bf1-9220-576fee19a13d] Running
	I0307 22:42:22.333097    7476 system_pods.go:89] "kube-scheduler-addons-723800" [4c252d46-f39a-44b1-bd60-e02ef8d5f989] Running
	I0307 22:42:22.333097    7476 system_pods.go:89] "metrics-server-69cf46c98-5572t" [75d7cf2c-199d-4d5f-8105-f7b1bdb0812d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0307 22:42:22.333097    7476 system_pods.go:89] "nvidia-device-plugin-daemonset-wthv5" [143a4a10-8313-40ab-a7f6-613f980a9728] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0307 22:42:22.333097    7476 system_pods.go:89] "registry-proxy-gggz5" [f077c346-86c0-44cb-93fa-374ceed6f5c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0307 22:42:22.333097    7476 system_pods.go:89] "registry-vq7gp" [15283e17-0641-48c8-bed6-7b74b3939a32] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0307 22:42:22.333097    7476 system_pods.go:89] "snapshot-controller-58dbcc7b99-7d5jj" [d0b5c16d-10bb-4223-8857-210019ce726f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 22:42:22.333097    7476 system_pods.go:89] "snapshot-controller-58dbcc7b99-slp74" [96f0d7e4-91f3-4bea-ba12-f33b4ec03aec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0307 22:42:22.333097    7476 system_pods.go:89] "storage-provisioner" [5f9aed8e-67b2-47fb-9d13-d2d67f48234d] Running
	I0307 22:42:22.333633    7476 system_pods.go:89] "tiller-deploy-7b677967b9-k7wdr" [bd5da928-4e4f-4fc2-8cea-2a5c1602c03e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0307 22:42:22.333633    7476 system_pods.go:126] duration metric: took 202.4045ms to wait for k8s-apps to be running ...
	I0307 22:42:22.333713    7476 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 22:42:22.345533    7476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 22:42:22.368143    7476 system_svc.go:56] duration metric: took 34.467ms WaitForService to wait for kubelet
	I0307 22:42:22.368177    7476 kubeadm.go:576] duration metric: took 45.0283252s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 22:42:22.368287    7476 node_conditions.go:102] verifying NodePressure condition ...
	I0307 22:42:22.459201    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:22.460091    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:22.530298    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:22.530298    7476 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 22:42:22.530298    7476 node_conditions.go:123] node cpu capacity is 2
	I0307 22:42:22.530298    7476 node_conditions.go:105] duration metric: took 162.0099ms to run NodePressure ...
	I0307 22:42:22.530298    7476 start.go:240] waiting for startup goroutines ...
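
node_conditions.go verifies NodePressure and records the node's capacity (17734596Ki ephemeral storage, 2 CPUs here). A client-go sketch that lists the nodes and prints the same capacity fields; the kubeconfig path is reused from the log and the code is illustrative, not minikube's node_conditions.go.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity includes cpu, memory, pods and ephemeral-storage quantities.
            fmt.Printf("%s: cpu=%s capacity=%v\n", n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity)
        }
    }
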
	I0307 22:42:22.750810    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:22.957206    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:22.957206    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:23.029818    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:23.261662    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:23.450752    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:23.455544    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:23.529237    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:23.757746    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:23.948194    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:23.952295    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:24.026270    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:24.259745    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:24.459932    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:24.459932    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:24.523669    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:24.762525    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:24.943809    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:24.958870    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:25.033266    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:25.254656    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:25.457306    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:25.458012    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:25.531915    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:25.762757    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:25.949790    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:25.954870    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:26.037155    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:26.250693    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:27.623748    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:27.630535    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:27.630535    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:27.631769    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:28.191962    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:28.198961    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:28.203900    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:28.206454    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:28.206949    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:28.214040    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:28.214891    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:28.215748    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:28.255605    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:28.470927    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:28.471841    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:28.537141    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:28.766200    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:28.956042    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:28.963063    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:29.027388    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:29.266257    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:29.444244    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:29.450608    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:29.528369    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:29.756106    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:29.943692    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:29.956644    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:30.031360    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:30.242673    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:30.441869    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:30.466628    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:30.527354    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:30.752415    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:30.956523    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:30.956523    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:31.020755    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:31.248172    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:31.437732    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:31.457813    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:31.517195    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:31.744820    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:31.952250    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:31.958036    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:32.034780    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:32.244880    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:32.452685    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:32.454159    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:32.525826    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:32.741098    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:32.947141    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:32.947141    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:33.013849    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:33.250457    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:33.442553    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:33.459014    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:33.531891    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:33.755514    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:33.951087    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:33.951316    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:34.028797    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:34.253695    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:34.441543    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:34.460704    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:34.524913    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:34.752536    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:34.949289    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:34.953632    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:35.032380    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:35.253340    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:35.441683    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:35.461763    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:35.517257    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:35.744617    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:35.954850    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:35.961081    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:36.014982    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:36.249735    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:36.452921    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:36.452921    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:36.515767    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:36.753214    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:36.946298    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:36.946298    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:37.020937    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:37.243609    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:37.440313    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:37.465630    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:37.523030    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:37.757787    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:37.953934    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:37.954429    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:38.032764    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:38.261813    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:38.443727    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:38.458446    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:38.534654    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:38.751030    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:38.946538    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:38.950819    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:39.018116    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:39.262163    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:39.453897    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:39.454608    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:39.535600    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:39.754908    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:39.957296    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:39.957947    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:40.018695    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:40.255123    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:40.453288    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:40.453959    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:40.524544    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:40.761013    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:40.962271    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:40.963138    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:41.017632    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:41.251757    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:41.456258    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:41.457061    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:41.536305    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:41.756998    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:41.947924    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:41.952820    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:42.037921    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:42.262863    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:42.452231    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:42.455277    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:42.536119    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:42.761905    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:42.952090    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:42.955375    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:43.022137    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:43.249762    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:43.453744    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:43.457313    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:43.738474    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:43.746669    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:44.809723    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:44.809723    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:44.814620    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:44.816752    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:45.557618    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:45.560350    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:45.561097    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:45.568157    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:45.570478    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:45.572897    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:45.572897    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:45.579078    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:45.749400    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:45.967156    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:45.967198    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:46.025319    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:46.255278    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:46.448345    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:46.458968    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:46.567980    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:46.761323    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:46.950480    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:46.950749    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:47.035931    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:47.263392    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:47.449532    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:47.465519    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:47.522673    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:47.752353    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:47.945015    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:47.945015    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:48.028010    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:48.258116    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:48.442507    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:48.461272    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:48.532873    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:48.763864    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:48.951759    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:48.954102    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:49.028319    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:49.255164    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:49.456610    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:49.457226    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:49.524572    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:49.758835    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:49.954751    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:49.955370    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:50.035628    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:50.244417    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:50.457786    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:50.460387    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:50.536408    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:50.762595    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:50.952752    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:50.952997    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:51.033741    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:51.254808    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:51.442607    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:51.467742    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:51.527273    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:51.748046    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:51.952905    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:51.953586    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:52.032389    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:52.255645    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:52.456715    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:52.456715    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:52.531999    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:52.754775    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:52.958260    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:52.958260    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:53.021050    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:53.259241    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:53.453950    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:53.455183    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:53.524022    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:53.765890    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:53.946020    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:53.949146    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:54.025679    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:54.255304    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:54.453951    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:54.455819    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:54.531709    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:54.760368    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:54.955643    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:54.958607    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:55.037901    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:55.400519    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:55.453846    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:55.453846    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:55.523205    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:55.945605    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:55.956230    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:55.959229    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:56.494350    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:56.494403    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:56.495196    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:56.495372    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:56.531584    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:56.843905    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:56.950732    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:56.968228    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:57.021191    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:57.258375    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:57.458978    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:57.459676    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:57.531816    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:57.753656    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:57.955998    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:57.961629    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:58.039212    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:58.257707    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:58.443336    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:58.465434    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:58.519143    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:58.762044    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:59.215153    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:59.215153    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:59.215153    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:59.246265    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:59.466735    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 22:42:59.467767    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:42:59.530266    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:42:59.746678    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:42:59.951680    7476 kapi.go:107] duration metric: took 53.0074357s to wait for kubernetes.io/minikube-addons=registry ...
	I0307 22:42:59.951877    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:00.047992    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:00.258855    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:00.475283    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:00.535194    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:00.754248    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:00.969103    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:01.020316    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:01.249689    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:01.457141    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:01.538192    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:01.747717    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:01.948325    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:02.030409    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:02.254325    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:02.457919    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:02.521352    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:02.751808    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:02.947008    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:03.018939    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:03.246355    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:03.441429    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:03.543833    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:03.751394    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:03.951384    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:04.033420    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:04.248754    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:04.456469    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:04.528370    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:04.752448    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:04.951452    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:05.025708    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:05.258267    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:05.448180    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:05.527905    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:05.745927    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:05.949552    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:06.341153    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:06.341325    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:06.454081    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:06.526129    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:07.102460    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:07.103564    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:07.106998    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:07.790824    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:07.794759    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:07.795094    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:07.798427    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:08.151175    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:08.151898    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:08.251884    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:08.454453    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:08.523274    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:08.755039    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:08.963964    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:09.025195    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:09.247911    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:09.459611    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:09.532025    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:09.754077    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:09.944023    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:10.030095    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:10.253664    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:10.442099    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:10.518555    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:10.765694    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:10.954859    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:11.017960    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:11.261977    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:11.452075    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:11.525114    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:12.156810    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:12.157898    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:12.157898    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:12.644801    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:12.644801    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:12.645268    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:12.761146    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:12.948999    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:13.032606    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:13.257795    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:13.456667    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:13.521913    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:13.747450    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:13.959214    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:14.032275    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:14.254027    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:14.444218    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:14.528106    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:14.757506    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:14.957751    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:15.031847    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:15.258637    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:15.443632    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:15.521475    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:15.748834    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:15.942671    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:16.022702    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:16.253981    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:16.455576    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:16.891772    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:16.897418    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:17.196083    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:17.204279    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:17.245806    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:17.457679    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:17.529744    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:17.751183    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:17.945758    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:18.046941    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:18.256741    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:18.474255    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:18.523058    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:18.747808    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:18.953210    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:19.017983    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:19.294422    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:19.450267    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:19.523523    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:19.772260    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:19.957932    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:20.032765    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:20.253360    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:20.456132    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:20.535452    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:20.763336    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:20.944026    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:21.026503    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:21.260589    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:21.460739    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:21.532592    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:21.749295    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:21.943907    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:22.192142    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:22.644248    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:22.645243    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:22.645243    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:23.093182    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:23.096993    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:23.099863    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:23.257328    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:23.444715    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:23.532445    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:23.758687    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:23.946885    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:24.018953    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:24.256039    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:24.457572    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:24.533288    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:24.752410    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:24.957011    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:25.018900    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:25.251930    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:25.459134    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:25.529499    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:25.758626    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:26.134926    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:26.139844    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:26.791336    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:26.791971    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:26.792739    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:26.801504    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:26.950048    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:27.031356    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:27.247618    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:27.461273    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:28.045467    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:28.047369    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:28.047971    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:28.051667    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:28.248308    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:28.459028    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:28.539419    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:28.748543    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:28.943699    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:29.019299    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:29.266070    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:29.448893    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:29.526892    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:29.765721    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:29.944943    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:30.034251    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:30.253178    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:30.448682    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:30.528778    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:30.750128    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:30.954918    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:31.026797    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:31.256220    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:31.443032    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:31.522935    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:31.766783    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:31.942808    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:32.019804    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:32.646890    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:32.648922    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:32.649047    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:32.752388    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:32.950687    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:33.026202    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:33.261312    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:33.448456    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:33.542522    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:33.752652    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:33.950161    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:34.026729    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:34.255365    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:34.454399    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:34.531341    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:35.061318    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:35.061865    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:35.076396    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:35.269963    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:35.463524    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:35.537791    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:35.761318    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:35.953609    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:36.027943    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:36.249213    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:36.454233    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:36.531408    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:36.748719    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:36.942455    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:37.022695    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:37.262985    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:37.449584    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:37.529955    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:37.746533    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:37.958852    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:38.031880    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:38.262193    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:38.449679    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:38.525118    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:38.747417    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:38.956395    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:39.029166    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:39.259789    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:39.455225    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:39.531110    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:39.764285    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:39.950584    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:40.029777    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:40.251893    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:40.449019    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:40.520223    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:40.753090    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:40.945910    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:41.029393    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:41.559065    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:41.560281    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:41.561235    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:41.773250    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:41.946838    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:42.047477    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:42.249565    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:42.462987    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:42.520055    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:42.748402    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:42.952579    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:43.046178    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:43.255970    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:43.449573    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:43.519806    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:43.753531    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:43.944540    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:44.032231    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:44.266076    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:44.457588    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:44.537911    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:44.759565    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:44.954045    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:45.018719    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:45.249711    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:45.450514    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:45.541726    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:45.755954    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:45.955632    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:46.027747    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:46.255388    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:46.447832    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:46.520205    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:46.759676    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:46.948536    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:47.027498    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:47.267426    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:47.548792    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:47.549517    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:47.893047    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:47.951298    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:48.025612    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:48.263701    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:48.444209    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:48.526647    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:48.754839    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:48.948028    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:49.035297    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:49.257662    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:49.457482    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:49.530734    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:49.763416    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:49.950961    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:50.031331    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:50.267126    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:50.993881    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:50.999539    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:51.001955    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:51.009162    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:51.038199    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:51.249963    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:51.470406    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:51.525078    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:51.763163    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:51.944274    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:52.036910    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:52.253098    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:52.444362    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:52.523933    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:52.756103    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:52.958391    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:53.031106    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:53.255511    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:53.460500    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:53.525842    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:53.756650    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:53.954717    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:54.033654    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:54.248667    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:54.448700    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:54.540948    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:54.765308    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:54.956586    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:55.032276    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:55.263331    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:55.454002    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:55.521879    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:55.747763    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:55.961040    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:56.040976    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:56.258544    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:56.457514    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:56.528640    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:56.757545    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:56.963265    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:57.027545    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:57.261588    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:57.453574    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:57.521673    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:57.752388    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:57.952840    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:58.024996    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:58.597862    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:58.599695    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:58.602076    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:58.747702    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:58.960018    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:59.032962    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:59.274260    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:59.450609    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:43:59.533054    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:43:59.764332    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:43:59.958083    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:00.034856    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:00.268584    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:00.504843    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:00.524884    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:00.749014    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:00.951044    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:01.024142    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:01.245674    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:01.453561    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:01.524774    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:01.782525    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:01.949430    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:02.027556    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:02.260857    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:02.444550    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:02.530579    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:02.760349    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:02.963131    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:03.020697    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:03.288401    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:03.454606    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:03.528023    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:03.761558    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:03.949364    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:04.022061    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:04.261712    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:04.453372    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:04.519048    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:04.761036    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:04.955961    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:05.027747    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:05.246070    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:05.451425    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:05.532304    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:05.748778    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:05.946287    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:06.021280    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:06.248049    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:06.449148    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:06.525232    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:06.763962    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:07.348307    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:07.359236    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:07.364279    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:07.455346    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:07.536570    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:07.751382    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 22:44:07.948473    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:08.033278    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:08.266379    7476 kapi.go:107] duration metric: took 1m59.5276383s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0307 22:44:08.454020    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:08.532376    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:08.952339    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:09.037023    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:09.457663    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:09.543865    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:09.956813    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:10.029247    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:10.454792    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:10.522199    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:10.945031    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:11.028256    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:11.444359    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:11.526983    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:11.954606    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:12.022539    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:12.446061    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:12.528409    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:12.954562    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:13.032624    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:13.458311    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:13.525646    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:13.954735    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:14.034763    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:14.456383    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:14.522946    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:14.952090    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:15.021492    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:15.447115    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:15.530100    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:15.956451    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:16.024041    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:16.446153    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:16.529039    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:16.951590    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:17.034216    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:17.445384    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:17.528569    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:17.951809    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:18.019132    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:18.454411    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:18.524539    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:18.951384    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:19.045812    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:19.458859    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:19.518807    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:19.949672    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:20.036184    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:20.454803    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:20.520653    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:20.950549    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:21.020765    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:21.448218    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:21.529767    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:21.953439    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:22.040893    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:22.449645    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:22.526746    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:22.950425    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:23.030257    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:23.444534    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:23.524313    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:23.956573    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:24.103201    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:24.451270    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:24.528213    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:25.092092    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:25.096203    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:25.463816    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:25.528370    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:25.963695    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:26.036746    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:26.449305    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:26.530961    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:26.948374    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:27.035814    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:27.446514    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:27.522681    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:27.952577    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:28.026216    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:28.452075    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:28.531816    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:28.957641    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:29.036216    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:29.458548    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:29.535180    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:29.961048    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:30.018736    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:30.452979    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:30.522887    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:30.961593    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:31.024348    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:31.447522    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:31.520995    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:31.958557    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:32.039277    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:32.891530    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:32.892532    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:33.050958    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:33.051240    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:33.754302    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:33.758583    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:33.960993    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:34.034626    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:34.451720    7476 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 22:44:34.526909    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:34.951644    7476 kapi.go:107] duration metric: took 2m28.0185517s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0307 22:44:35.036266    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:35.532990    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:36.029404    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:36.524703    7476 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 22:44:37.042422    7476 kapi.go:107] duration metric: took 2m25.5278103s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0307 22:44:37.045528    7476 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-723800 cluster.
	I0307 22:44:37.047750    7476 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0307 22:44:37.051013    7476 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0307 22:44:37.052924    7476 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, yakd, metrics-server, helm-tiller, inspektor-gadget, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0307 22:44:37.057763    7476 addons.go:505] duration metric: took 2m59.7166289s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns yakd metrics-server helm-tiller inspektor-gadget default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0307 22:44:37.057763    7476 start.go:245] waiting for cluster config update ...
	I0307 22:44:37.057763    7476 start.go:254] writing updated cluster config ...
	I0307 22:44:37.073749    7476 ssh_runner.go:195] Run: rm -f paused
	I0307 22:44:37.263230    7476 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0307 22:44:37.266640    7476 out.go:177] * Done! kubectl is now configured to use "addons-723800" cluster and "default" namespace by default
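
	The gcp-auth lines above spell out two follow-up actions: a pod can opt out of credential mounting by carrying the gcp-auth-skip-secret label at creation time, and pods created before the addon finished only pick up credentials after the addon is re-enabled with --refresh. A minimal sketch of both against the same addons-723800 context (the pod name no-creds-demo, the busybox image, and the sleep command are illustrative placeholders, not taken from this run):

	  # pod carrying the skip label at creation time, so the gcp-auth webhook leaves it unmutated
	  kubectl --context addons-723800 run no-creds-demo --image=busybox --labels="gcp-auth-skip-secret=true" --restart=Never -- sleep 300

	  # re-run the addon so existing pods get the credentials mounted
	  out/minikube-windows-amd64.exe -p addons-723800 addons enable gcp-auth --refresh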
	
	
	==> Docker <==
	Mar 07 22:45:28 addons-723800 dockerd[1309]: time="2024-03-07T22:45:28.605313708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 22:45:28 addons-723800 dockerd[1309]: time="2024-03-07T22:45:28.605338308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 22:45:28 addons-723800 dockerd[1309]: time="2024-03-07T22:45:28.605506009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 22:45:28 addons-723800 cri-dockerd[1197]: time="2024-03-07T22:45:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f7c2629a9c491e65215c838083102c94352e4b8cb2f672c10285ed93140904d1/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 07 22:45:29 addons-723800 dockerd[1309]: time="2024-03-07T22:45:29.152843362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 22:45:29 addons-723800 dockerd[1309]: time="2024-03-07T22:45:29.155202782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 22:45:29 addons-723800 dockerd[1309]: time="2024-03-07T22:45:29.155510185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 22:45:29 addons-723800 dockerd[1309]: time="2024-03-07T22:45:29.156114290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 22:45:29 addons-723800 dockerd[1309]: time="2024-03-07T22:45:29.319424627Z" level=info msg="shim disconnected" id=a454b56eccbbbc8f6065af7135587072c3dae72c7b259c59839c5abb3170ae3f namespace=moby
	Mar 07 22:45:29 addons-723800 dockerd[1309]: time="2024-03-07T22:45:29.319719030Z" level=warning msg="cleaning up after shim disconnected" id=a454b56eccbbbc8f6065af7135587072c3dae72c7b259c59839c5abb3170ae3f namespace=moby
	Mar 07 22:45:29 addons-723800 dockerd[1309]: time="2024-03-07T22:45:29.319795130Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 22:45:29 addons-723800 dockerd[1303]: time="2024-03-07T22:45:29.324595173Z" level=info msg="ignoring event" container=a454b56eccbbbc8f6065af7135587072c3dae72c7b259c59839c5abb3170ae3f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 22:45:29 addons-723800 dockerd[1309]: time="2024-03-07T22:45:29.343745241Z" level=warning msg="cleanup warnings time=\"2024-03-07T22:45:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Mar 07 22:45:31 addons-723800 dockerd[1303]: time="2024-03-07T22:45:31.470316527Z" level=info msg="ignoring event" container=f7c2629a9c491e65215c838083102c94352e4b8cb2f672c10285ed93140904d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 22:45:31 addons-723800 dockerd[1309]: time="2024-03-07T22:45:31.471351836Z" level=info msg="shim disconnected" id=f7c2629a9c491e65215c838083102c94352e4b8cb2f672c10285ed93140904d1 namespace=moby
	Mar 07 22:45:31 addons-723800 dockerd[1309]: time="2024-03-07T22:45:31.472967250Z" level=warning msg="cleaning up after shim disconnected" id=f7c2629a9c491e65215c838083102c94352e4b8cb2f672c10285ed93140904d1 namespace=moby
	Mar 07 22:45:31 addons-723800 dockerd[1309]: time="2024-03-07T22:45:31.473148552Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 22:45:32 addons-723800 dockerd[1303]: time="2024-03-07T22:45:32.673063673Z" level=info msg="ignoring event" container=c03617c94f2797bf6c1d4bb740a53a966ba01ebcf7ee9f55205f1edde8978276 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 22:45:32 addons-723800 dockerd[1309]: time="2024-03-07T22:45:32.673866180Z" level=info msg="shim disconnected" id=c03617c94f2797bf6c1d4bb740a53a966ba01ebcf7ee9f55205f1edde8978276 namespace=moby
	Mar 07 22:45:32 addons-723800 dockerd[1309]: time="2024-03-07T22:45:32.673947480Z" level=warning msg="cleaning up after shim disconnected" id=c03617c94f2797bf6c1d4bb740a53a966ba01ebcf7ee9f55205f1edde8978276 namespace=moby
	Mar 07 22:45:32 addons-723800 dockerd[1309]: time="2024-03-07T22:45:32.673961781Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 22:45:32 addons-723800 dockerd[1303]: time="2024-03-07T22:45:32.844164072Z" level=info msg="ignoring event" container=ebfb92e050ec02ab948834366637b446ac1df4b4e5cf660dd86325bb0f5cbdc3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 22:45:32 addons-723800 dockerd[1309]: time="2024-03-07T22:45:32.846511493Z" level=info msg="shim disconnected" id=ebfb92e050ec02ab948834366637b446ac1df4b4e5cf660dd86325bb0f5cbdc3 namespace=moby
	Mar 07 22:45:32 addons-723800 dockerd[1309]: time="2024-03-07T22:45:32.846566993Z" level=warning msg="cleaning up after shim disconnected" id=ebfb92e050ec02ab948834366637b446ac1df4b4e5cf660dd86325bb0f5cbdc3 namespace=moby
	Mar 07 22:45:32 addons-723800 dockerd[1309]: time="2024-03-07T22:45:32.846578593Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	a454b56eccbbb       a416a98b71e22                                                                                                                                5 seconds ago        Exited              helper-pod                               0                   f7c2629a9c491       helper-pod-delete-pvc-9400bf85-94ed-489b-a648-5551c6e089a1
	f456cf1857757       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                                      6 seconds ago        Running             hello-world-app                          0                   0e8653324a1e2       hello-world-app-5d77478584-d6npt
	4e72f3b008133       busybox@sha256:650fd573e056b679a5110a70aabeb01e26b76e545ec4b9c70a9523f2dfaf18c6                                                              19 seconds ago       Exited              busybox                                  0                   abeba0e384694       test-local-path
	d4cd0160b917a       nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                                                                28 seconds ago       Running             nginx                                    0                   281f3feddfa42       nginx
	5740ab226db41       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                                 57 seconds ago       Running             gcp-auth                                 0                   8ec06ac91e4f5       gcp-auth-5f6b4f85fd-xtm94
	4a3819caf9089       registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c                             About a minute ago   Running             controller                               0                   f0733dcd79b82       ingress-nginx-controller-76dc478dd8-dhwzh
	e969c424caad1       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   4c1653f582fd8       csi-hostpathplugin-v2wbg
	1f03380844b7d       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   4c1653f582fd8       csi-hostpathplugin-v2wbg
	d40bc67374853       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   4c1653f582fd8       csi-hostpathplugin-v2wbg
	fb0e1eda3142b       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   4c1653f582fd8       csi-hostpathplugin-v2wbg
	16e3e44e81588       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   4c1653f582fd8       csi-hostpathplugin-v2wbg
	80c07609df6ff       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   f655a4501c462       csi-hostpath-attacher-0
	7b42764c92e21       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   4c1653f582fd8       csi-hostpathplugin-v2wbg
	abf834a51ae19       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   6dd53c8943964       csi-hostpath-resizer-0
	855905354c5d8       b29d748098e32                                                                                                                                About a minute ago   Exited              patch                                    1                   14866eb67d814       ingress-nginx-admission-patch-rwnsm
	75486494c7ba6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334                   About a minute ago   Exited              create                                   0                   2bb28c368eea8       ingress-nginx-admission-create-wkrxc
	dfff5eb6ce0e7       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   4d1271521a2aa       snapshot-controller-58dbcc7b99-7d5jj
	9ceffabcc3c17       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       About a minute ago   Running             local-path-provisioner                   0                   5e9c1fe049fb5       local-path-provisioner-78b46b4d5c-qhst9
	147209e5a81c8       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   e952cc5bcde57       snapshot-controller-58dbcc7b99-slp74
	4939490b64885       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   bd4d18b80bab0       yakd-dashboard-9947fc6bf-64vpb
	4b0fdf8d87393       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   bfbc7c972d3ee       kube-ingress-dns-minikube
	c03617c94f279       nvcr.io/nvidia/k8s-device-plugin@sha256:50aa9517d771e3b0ffa7fded8f1e988dba680a7ff5efce162ce31d1b5ec043e2                                     3 minutes ago        Exited              nvidia-device-plugin-ctr                 0                   ebfb92e050ec0       nvidia-device-plugin-daemonset-wthv5
	b9984b3767870       gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15                               3 minutes ago        Running             cloud-spanner-emulator                   0                   add53bf49037c       cloud-spanner-emulator-6548d5df46-c972l
	96f170a30b1e4       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   2882c12c5796b       storage-provisioner
	eb5f4c12b9b0b       ead0a4a53df89                                                                                                                                3 minutes ago        Running             coredns                                  0                   0c36112635141       coredns-5dd5756b68-8j9qj
	cba27fd70da5a       83f6cc407eed8                                                                                                                                3 minutes ago        Running             kube-proxy                               0                   9b6aa6572fa23       kube-proxy-qs82f
	9be5640222dc2       d058aa5ab969c                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   8cb2b49f6b579       kube-controller-manager-addons-723800
	f958f0e4b3b0f       73deb9a3f7025                                                                                                                                4 minutes ago        Running             etcd                                     0                   ee0c63702a23a       etcd-addons-723800
	79a5ec30b0247       7fe0e6f37db33                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   702c6ce3c80e1       kube-apiserver-addons-723800
	1be58b58940fa       e3db313c6dbc0                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   0b5d23b325a3f       kube-scheduler-addons-723800
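
	The table above uses the CRI column layout (CONTAINER, IMAGE, CREATED, STATE, NAME, ATTEMPT, POD ID, POD), so the same view can be regenerated on the node itself; a hedged example, assuming crictl is what minikube's log collector invokes here:

	  out/minikube-windows-amd64.exe -p addons-723800 ssh -- sudo crictl ps -a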
	
	
	==> controller_ingress [4a3819caf908] <==
	I0307 22:44:35.469125       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0307 22:44:35.469322       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-76dc478dd8-dhwzh", UID:"59fbda9c-5dac-4f59-bbc6-c91bf326f08b", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0307 22:44:59.424502       7 controller.go:1108] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0307 22:44:59.453733       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.029s renderingIngressLength:1 renderingIngressTime:0s admissionTime:17.8kBs testedConfigurationSize:0.029}
	I0307 22:44:59.453839       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0307 22:44:59.462472       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0307 22:44:59.463382       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"ddc643f1-c1f7-446f-84af-d9e62e8ee2c7", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1367", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0307 22:45:01.801883       7 controller.go:1214] Service "default/nginx" does not have any active Endpoint.
	I0307 22:45:01.802042       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0307 22:45:01.870661       7 controller.go:210] "Backend successfully reloaded"
	I0307 22:45:01.871444       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-76dc478dd8-dhwzh", UID:"59fbda9c-5dac-4f59-bbc6-c91bf326f08b", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0307 22:45:05.135445       7 controller.go:1214] Service "default/nginx" does not have any active Endpoint.
	W0307 22:45:22.940814       7 controller.go:1108] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0307 22:45:23.048849       7 admission.go:149] processed ingress via admission controller {testedIngressLength:2 testedIngressTime:0.108s renderingIngressLength:2 renderingIngressTime:0s admissionTime:25.7kBs testedConfigurationSize:0.108}
	I0307 22:45:23.048883       7 main.go:107] "successfully validated configuration, accepting" ingress="kube-system/example-ingress"
	I0307 22:45:23.111547       7 store.go:440] "Found valid IngressClass" ingress="kube-system/example-ingress" ingressclass="nginx"
	W0307 22:45:23.112060       7 controller.go:1108] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0307 22:45:23.112266       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0307 22:45:23.116451       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"example-ingress", UID:"708c38e4-cdda-4253-a074-a07cf782e7a9", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1518", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0307 22:45:23.261174       7 controller.go:210] "Backend successfully reloaded"
	I0307 22:45:23.261950       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-76dc478dd8-dhwzh", UID:"59fbda9c-5dac-4f59-bbc6-c91bf326f08b", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0307 22:45:26.448591       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0307 22:45:26.536173       7 controller.go:210] "Backend successfully reloaded"
	I0307 22:45:26.536570       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-76dc478dd8-dhwzh", UID:"59fbda9c-5dac-4f59-bbc6-c91bf326f08b", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	10.244.0.1 - - [07/Mar/2024:22:45:22 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.5.0" 80 0.001 [default-nginx-80] [] 10.244.0.25:80 615 0.001 200 29d6f507fc70a182df7e2acc6aa4f384
	
	
	==> coredns [eb5f4c12b9b0] <==
	[INFO] 10.244.0.21:48164 - 16977 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000958s
	[INFO] 10.244.0.21:48164 - 49740 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000761205s
	[INFO] 10.244.0.21:48164 - 37552 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0001151s
	[INFO] 10.244.0.21:50890 - 58547 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000189801s
	[INFO] 10.244.0.21:50890 - 44151 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000695s
	[INFO] 10.244.0.21:48164 - 58352 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000074001s
	[INFO] 10.244.0.21:50890 - 44180 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000759s
	[INFO] 10.244.0.21:48164 - 1205 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000334102s
	[INFO] 10.244.0.21:50890 - 29613 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000171501s
	[INFO] 10.244.0.21:50890 - 14076 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000809s
	[INFO] 10.244.0.21:50890 - 59679 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072701s
	[INFO] 10.244.0.21:55709 - 32978 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000092201s
	[INFO] 10.244.0.21:32843 - 30308 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000085501s
	[INFO] 10.244.0.21:55709 - 23093 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000073201s
	[INFO] 10.244.0.21:32843 - 19735 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000083201s
	[INFO] 10.244.0.21:55709 - 53697 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000084801s
	[INFO] 10.244.0.21:32843 - 40997 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000723s
	[INFO] 10.244.0.21:55709 - 19610 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000816s
	[INFO] 10.244.0.21:32843 - 60755 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000152601s
	[INFO] 10.244.0.21:55709 - 3696 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0001414s
	[INFO] 10.244.0.21:32843 - 19224 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000676s
	[INFO] 10.244.0.21:55709 - 17331 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000956s
	[INFO] 10.244.0.21:32843 - 55636 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000829805s
	[INFO] 10.244.0.21:55709 - 40396 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000946906s
	[INFO] 10.244.0.21:32843 - 10577 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000749004s
	
	
	==> describe nodes <==
	Name:               addons-723800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-723800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=addons-723800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T22_41_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-723800
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-723800"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 22:41:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-723800
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 22:45:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 22:45:31 +0000   Thu, 07 Mar 2024 22:41:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 22:45:31 +0000   Thu, 07 Mar 2024 22:41:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 22:45:31 +0000   Thu, 07 Mar 2024 22:41:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 22:45:31 +0000   Thu, 07 Mar 2024 22:41:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.63.241
	  Hostname:    addons-723800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	System Info:
	  Machine ID:                 3541be139cfe4c54922b653df2e4e637
	  System UUID:                ae9ee45c-f47d-f44e-af1b-f22094e519f6
	  Boot ID:                    7fe68f60-43c9-4b04-bd39-5d4e98abf613
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-c972l      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  default                     hello-world-app-5d77478584-d6npt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  gcp-auth                    gcp-auth-5f6b4f85fd-xtm94                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  ingress-nginx               ingress-nginx-controller-76dc478dd8-dhwzh    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m27s
	  kube-system                 coredns-5dd5756b68-8j9qj                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m55s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 csi-hostpathplugin-v2wbg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 etcd-addons-723800                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m10s
	  kube-system                 kube-apiserver-addons-723800                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-addons-723800        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kube-proxy-qs82f                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-scheduler-addons-723800                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 snapshot-controller-58dbcc7b99-7d5jj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 snapshot-controller-58dbcc7b99-slp74         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  local-path-storage          local-path-provisioner-78b46b4d5c-qhst9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-64vpb               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m43s  kube-proxy       
	  Normal  Starting                 4m10s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s  kubelet          Node addons-723800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s  kubelet          Node addons-723800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s  kubelet          Node addons-723800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m9s   kubelet          Node addons-723800 status is now: NodeReady
	  Normal  RegisteredNode           3m57s  node-controller  Node addons-723800 event: Registered Node addons-723800 in Controller
	
	
	==> dmesg <==
	[  +5.128252] kauditd_printk_skb: 15 callbacks suppressed
	[ +10.290402] kauditd_printk_skb: 33 callbacks suppressed
	[Mar 7 22:42] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.038530] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.249768] kauditd_printk_skb: 49 callbacks suppressed
	[ +13.614892] hrtimer: interrupt took 4106926 ns
	[Mar 7 22:43] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.688205] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.278439] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.045713] kauditd_printk_skb: 29 callbacks suppressed
	[ +12.766360] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.572136] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.111982] kauditd_printk_skb: 5 callbacks suppressed
	[Mar 7 22:44] kauditd_printk_skb: 2 callbacks suppressed
	[ +16.994667] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.137290] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.048997] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.775248] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.538457] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.836230] kauditd_printk_skb: 1 callbacks suppressed
	[Mar 7 22:45] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.266180] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.351944] kauditd_printk_skb: 31 callbacks suppressed
	[  +8.357708] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.120598] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [f958f0e4b3b0] <==
	{"level":"warn","ts":"2024-03-07T22:44:25.107744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.781125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13828"}
	{"level":"info","ts":"2024-03-07T22:44:25.107785Z","caller":"traceutil/trace.go:171","msg":"trace[581690669] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1203; }","duration":"136.822626ms","start":"2024-03-07T22:44:24.970955Z","end":"2024-03-07T22:44:25.107778Z","steps":["trace[581690669] 'agreement among raft nodes before linearized reading'  (duration: 136.746125ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-07T22:44:25.10814Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.281663ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-after-finished-controller\" ","response":"range_response_count:1 size:236"}
	{"level":"info","ts":"2024-03-07T22:44:25.108186Z","caller":"traceutil/trace.go:171","msg":"trace[1422419820] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-after-finished-controller; range_end:; response_count:1; response_revision:1203; }","duration":"106.326164ms","start":"2024-03-07T22:44:25.001851Z","end":"2024-03-07T22:44:25.108177Z","steps":["trace[1422419820] 'agreement among raft nodes before linearized reading'  (duration: 106.264463ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-07T22:44:32.903456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"436.952638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13828"}
	{"level":"info","ts":"2024-03-07T22:44:32.903789Z","caller":"traceutil/trace.go:171","msg":"trace[1362100748] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1223; }","duration":"437.30784ms","start":"2024-03-07T22:44:32.466465Z","end":"2024-03-07T22:44:32.903773Z","steps":["trace[1362100748] 'range keys from in-memory index tree'  (duration: 436.830336ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-07T22:44:32.903912Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-07T22:44:32.466451Z","time spent":"437.44354ms","remote":"127.0.0.1:35932","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13851,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-03-07T22:44:32.904578Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"358.482281ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4153"}
	{"level":"info","ts":"2024-03-07T22:44:32.905401Z","caller":"traceutil/trace.go:171","msg":"trace[1061372043] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1223; }","duration":"359.287386ms","start":"2024-03-07T22:44:32.546085Z","end":"2024-03-07T22:44:32.905372Z","steps":["trace[1061372043] 'range keys from in-memory index tree'  (duration: 358.394881ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-07T22:44:32.909603Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-07T22:44:32.546069Z","time spent":"363.48751ms","remote":"127.0.0.1:35932","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":4176,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"info","ts":"2024-03-07T22:44:33.458856Z","caller":"traceutil/trace.go:171","msg":"trace[1063603094] linearizableReadLoop","detail":"{readStateIndex:1278; appliedIndex:1277; }","duration":"248.073939ms","start":"2024-03-07T22:44:33.210667Z","end":"2024-03-07T22:44:33.458741Z","steps":["trace[1063603094] 'read index received'  (duration: 247.924738ms)","trace[1063603094] 'applied index is now lower than readState.Index'  (duration: 146.501µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-07T22:44:33.459449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.745043ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:497"}
	{"level":"info","ts":"2024-03-07T22:44:33.459549Z","caller":"traceutil/trace.go:171","msg":"trace[945254074] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1224; }","duration":"248.904244ms","start":"2024-03-07T22:44:33.210635Z","end":"2024-03-07T22:44:33.459539Z","steps":["trace[945254074] 'agreement among raft nodes before linearized reading'  (duration: 248.427442ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-07T22:44:33.460013Z","caller":"traceutil/trace.go:171","msg":"trace[294308838] transaction","detail":"{read_only:false; response_revision:1224; number_of_response:1; }","duration":"283.878647ms","start":"2024-03-07T22:44:33.17612Z","end":"2024-03-07T22:44:33.459998Z","steps":["trace[294308838] 'process raft request'  (duration: 282.522139ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-07T22:44:33.769688Z","caller":"traceutil/trace.go:171","msg":"trace[1730006079] linearizableReadLoop","detail":"{readStateIndex:1279; appliedIndex:1278; }","duration":"299.237836ms","start":"2024-03-07T22:44:33.470431Z","end":"2024-03-07T22:44:33.769669Z","steps":["trace[1730006079] 'read index received'  (duration: 293.936005ms)","trace[1730006079] 'applied index is now lower than readState.Index'  (duration: 5.300631ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-07T22:44:33.769815Z","caller":"traceutil/trace.go:171","msg":"trace[43662655] transaction","detail":"{read_only:false; response_revision:1225; number_of_response:1; }","duration":"302.643855ms","start":"2024-03-07T22:44:33.467161Z","end":"2024-03-07T22:44:33.769805Z","steps":["trace[43662655] 'process raft request'  (duration: 297.245324ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-07T22:44:33.769886Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-07T22:44:33.467126Z","time spent":"302.709656ms","remote":"127.0.0.1:36012","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":482,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1216 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:419 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-03-07T22:44:33.770639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.071994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4153"}
	{"level":"info","ts":"2024-03-07T22:44:33.772058Z","caller":"traceutil/trace.go:171","msg":"trace[1590202060] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1225; }","duration":"224.477402ms","start":"2024-03-07T22:44:33.547555Z","end":"2024-03-07T22:44:33.772032Z","steps":["trace[1590202060] 'agreement among raft nodes before linearized reading'  (duration: 223.022394ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-07T22:44:33.770728Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.307343ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13828"}
	{"level":"info","ts":"2024-03-07T22:44:33.772496Z","caller":"traceutil/trace.go:171","msg":"trace[1814381891] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1225; }","duration":"302.071553ms","start":"2024-03-07T22:44:33.470415Z","end":"2024-03-07T22:44:33.772487Z","steps":["trace[1814381891] 'agreement among raft nodes before linearized reading'  (duration: 300.259542ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-07T22:44:33.772523Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-07T22:44:33.470407Z","time spent":"302.106153ms","remote":"127.0.0.1:35932","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13851,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-03-07T22:44:33.770771Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.199657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:497"}
	{"level":"info","ts":"2024-03-07T22:44:33.772631Z","caller":"traceutil/trace.go:171","msg":"trace[24127467] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1225; }","duration":"184.057168ms","start":"2024-03-07T22:44:33.588567Z","end":"2024-03-07T22:44:33.772625Z","steps":["trace[24127467] 'agreement among raft nodes before linearized reading'  (duration: 182.173057ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-07T22:45:03.85851Z","caller":"traceutil/trace.go:171","msg":"trace[795589300] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1412; }","duration":"113.215239ms","start":"2024-03-07T22:45:03.74528Z","end":"2024-03-07T22:45:03.858495Z","steps":["trace[795589300] 'process raft request'  (duration: 112.884137ms)"],"step_count":1}
	
	
	==> gcp-auth [5740ab226db4] <==
	2024/03/07 22:44:36 GCP Auth Webhook started!
	2024/03/07 22:44:42 Ready to marshal response ...
	2024/03/07 22:44:42 Ready to write response ...
	2024/03/07 22:44:49 Ready to marshal response ...
	2024/03/07 22:44:49 Ready to write response ...
	2024/03/07 22:44:59 Ready to marshal response ...
	2024/03/07 22:44:59 Ready to write response ...
	2024/03/07 22:45:04 Ready to marshal response ...
	2024/03/07 22:45:04 Ready to write response ...
	2024/03/07 22:45:04 Ready to marshal response ...
	2024/03/07 22:45:04 Ready to write response ...
	2024/03/07 22:45:22 Ready to marshal response ...
	2024/03/07 22:45:22 Ready to write response ...
	2024/03/07 22:45:27 Ready to marshal response ...
	2024/03/07 22:45:27 Ready to write response ...
	
	
	==> kernel <==
	 22:45:34 up 6 min,  0 users,  load average: 2.05, 1.90, 0.89
	Linux addons-723800 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [79a5ec30b024] <==
	I0307 22:43:22.663240       1 trace.go:236] Trace[406682828]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.20.63.241,type:*v1.Endpoints,resource:apiServerIPInfo (07-Mar-2024 22:43:22.041) (total time: 621ms):
	Trace[406682828]: ---"Transaction prepared" 168ms (22:43:22.211)
	Trace[406682828]: ---"Txn call completed" 451ms (22:43:22.663)
	Trace[406682828]: [621.303126ms] [621.303126ms] END
	I0307 22:43:26.813159       1 trace.go:236] Trace[1618850401]: "List" accept:application/json, */*,audit-id:26010b57-247a-4f1d-a74b-61bdccc9ddb7,client:172.20.48.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (07-Mar-2024 22:43:26.271) (total time: 541ms):
	Trace[1618850401]: ["List(recursive=true) etcd3" audit-id:26010b57-247a-4f1d-a74b-61bdccc9ddb7,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 541ms (22:43:26.271)]
	Trace[1618850401]: [541.697435ms] [541.697435ms] END
	I0307 22:43:28.059442       1 trace.go:236] Trace[934893407]: "Update" accept:application/json, */*,audit-id:6cf0629d-89da-4c32-898c-1ab7dcf0efcd,client:172.20.63.241,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (07-Mar-2024 22:43:27.485) (total time: 573ms):
	Trace[934893407]: ["GuaranteedUpdate etcd3" audit-id:6cf0629d-89da-4c32-898c-1ab7dcf0efcd,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 573ms (22:43:27.485)
	Trace[934893407]:  ---"Txn call completed" 572ms (22:43:28.059)]
	Trace[934893407]: [573.804224ms] [573.804224ms] END
	I0307 22:43:28.065652       1 trace.go:236] Trace[1577726023]: "List" accept:application/json, */*,audit-id:5c7c8288-5638-4779-9875-8e9bd9976076,client:172.20.48.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (07-Mar-2024 22:43:27.548) (total time: 517ms):
	Trace[1577726023]: ["List(recursive=true) etcd3" audit-id:5c7c8288-5638-4779-9875-8e9bd9976076,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 517ms (22:43:27.548)]
	Trace[1577726023]: [517.215985ms] [517.215985ms] END
	I0307 22:43:51.021882       1 trace.go:236] Trace[1986076273]: "List" accept:application/json, */*,audit-id:bcfa0aee-e889-4b14-abd2-f71f1a4ac6da,client:172.20.48.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (07-Mar-2024 22:43:50.466) (total time: 552ms):
	Trace[1986076273]: ["List(recursive=true) etcd3" audit-id:bcfa0aee-e889-4b14-abd2-f71f1a4ac6da,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 555ms (22:43:50.466)]
	Trace[1986076273]: [552.567626ms] [552.567626ms] END
	I0307 22:44:19.711110       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0307 22:44:58.540378       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0307 22:44:58.565771       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0307 22:44:59.455129       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	W0307 22:44:59.666215       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0307 22:44:59.844515       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.54.180"}
	I0307 22:45:20.352291       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0307 22:45:23.325403       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.98.47"}
	
	
	==> kube-controller-manager [9be5640222dc] <==
	E0307 22:45:00.578629       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0307 22:45:02.790428       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 22:45:02.790482       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 22:45:03.463540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="6.3µs"
	I0307 22:45:04.238217       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0307 22:45:04.635320       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0307 22:45:06.614486       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0307 22:45:07.173563       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0307 22:45:07.173683       1 shared_informer.go:318] Caches are synced for resource quota
	I0307 22:45:07.504890       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0307 22:45:07.505293       1 shared_informer.go:318] Caches are synced for garbage collector
	W0307 22:45:07.676370       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 22:45:07.676671       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 22:45:09.034364       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I0307 22:45:11.492317       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="8.6µs"
	W0307 22:45:18.124148       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 22:45:18.124676       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 22:45:22.833352       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0307 22:45:22.879362       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-d6npt"
	I0307 22:45:22.926917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="98.476321ms"
	I0307 22:45:22.964338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="37.350135ms"
	I0307 22:45:22.964455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="77.3µs"
	I0307 22:45:23.002503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="137.901µs"
	I0307 22:45:28.118413       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.130177ms"
	I0307 22:45:28.119850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="1.400209ms"
	
	
	==> kube-proxy [cba27fd70da5] <==
	I0307 22:41:49.534162       1 server_others.go:69] "Using iptables proxy"
	I0307 22:41:49.682174       1 node.go:141] Successfully retrieved node IP: 172.20.63.241
	I0307 22:41:50.001146       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0307 22:41:50.001262       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0307 22:41:50.017343       1 server_others.go:152] "Using iptables Proxier"
	I0307 22:41:50.030896       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 22:41:50.033612       1 server.go:846] "Version info" version="v1.28.4"
	I0307 22:41:50.034670       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 22:41:50.046147       1 config.go:188] "Starting service config controller"
	I0307 22:41:50.046527       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0307 22:41:50.046886       1 config.go:97] "Starting endpoint slice config controller"
	I0307 22:41:50.047085       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0307 22:41:50.048110       1 config.go:315] "Starting node config controller"
	I0307 22:41:50.048360       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0307 22:41:50.147428       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0307 22:41:50.157696       1 shared_informer.go:318] Caches are synced for service config
	I0307 22:41:50.158898       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [1be58b58940f] <==
	W0307 22:41:20.756087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 22:41:20.756171       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0307 22:41:20.780618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 22:41:20.780817       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0307 22:41:20.808654       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 22:41:20.808714       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0307 22:41:20.883483       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 22:41:20.883776       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0307 22:41:20.906175       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0307 22:41:20.906196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0307 22:41:20.978780       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0307 22:41:20.978807       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0307 22:41:21.065897       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0307 22:41:21.066033       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0307 22:41:21.120229       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 22:41:21.120416       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0307 22:41:21.231051       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0307 22:41:21.231077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0307 22:41:21.305096       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0307 22:41:21.305124       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0307 22:41:21.322562       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0307 22:41:21.322793       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 22:41:21.399103       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0307 22:41:21.399462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0307 22:41:23.370245       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 07 22:45:31 addons-723800 kubelet[2755]: I0307 22:45:31.775443    2755 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/edd9e0bc-1230-4d22-bf51-714d91e52b68-script\") pod \"edd9e0bc-1230-4d22-bf51-714d91e52b68\" (UID: \"edd9e0bc-1230-4d22-bf51-714d91e52b68\") "
	Mar 07 22:45:31 addons-723800 kubelet[2755]: I0307 22:45:31.775568    2755 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/edd9e0bc-1230-4d22-bf51-714d91e52b68-data\") pod \"edd9e0bc-1230-4d22-bf51-714d91e52b68\" (UID: \"edd9e0bc-1230-4d22-bf51-714d91e52b68\") "
	Mar 07 22:45:31 addons-723800 kubelet[2755]: I0307 22:45:31.775615    2755 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/edd9e0bc-1230-4d22-bf51-714d91e52b68-gcp-creds\") pod \"edd9e0bc-1230-4d22-bf51-714d91e52b68\" (UID: \"edd9e0bc-1230-4d22-bf51-714d91e52b68\") "
	Mar 07 22:45:31 addons-723800 kubelet[2755]: I0307 22:45:31.775681    2755 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mxg6\" (UniqueName: \"kubernetes.io/projected/edd9e0bc-1230-4d22-bf51-714d91e52b68-kube-api-access-4mxg6\") pod \"edd9e0bc-1230-4d22-bf51-714d91e52b68\" (UID: \"edd9e0bc-1230-4d22-bf51-714d91e52b68\") "
	Mar 07 22:45:31 addons-723800 kubelet[2755]: I0307 22:45:31.777076    2755 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd9e0bc-1230-4d22-bf51-714d91e52b68-script" (OuterVolumeSpecName: "script") pod "edd9e0bc-1230-4d22-bf51-714d91e52b68" (UID: "edd9e0bc-1230-4d22-bf51-714d91e52b68"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Mar 07 22:45:31 addons-723800 kubelet[2755]: I0307 22:45:31.777312    2755 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd9e0bc-1230-4d22-bf51-714d91e52b68-data" (OuterVolumeSpecName: "data") pod "edd9e0bc-1230-4d22-bf51-714d91e52b68" (UID: "edd9e0bc-1230-4d22-bf51-714d91e52b68"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 07 22:45:31 addons-723800 kubelet[2755]: I0307 22:45:31.777531    2755 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edd9e0bc-1230-4d22-bf51-714d91e52b68-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "edd9e0bc-1230-4d22-bf51-714d91e52b68" (UID: "edd9e0bc-1230-4d22-bf51-714d91e52b68"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 07 22:45:31 addons-723800 kubelet[2755]: I0307 22:45:31.778920    2755 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd9e0bc-1230-4d22-bf51-714d91e52b68-kube-api-access-4mxg6" (OuterVolumeSpecName: "kube-api-access-4mxg6") pod "edd9e0bc-1230-4d22-bf51-714d91e52b68" (UID: "edd9e0bc-1230-4d22-bf51-714d91e52b68"). InnerVolumeSpecName "kube-api-access-4mxg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 22:45:31 addons-723800 kubelet[2755]: I0307 22:45:31.877179    2755 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/edd9e0bc-1230-4d22-bf51-714d91e52b68-script\") on node \"addons-723800\" DevicePath \"\""
	Mar 07 22:45:31 addons-723800 kubelet[2755]: I0307 22:45:31.877563    2755 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/edd9e0bc-1230-4d22-bf51-714d91e52b68-data\") on node \"addons-723800\" DevicePath \"\""
	Mar 07 22:45:31 addons-723800 kubelet[2755]: I0307 22:45:31.877670    2755 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/edd9e0bc-1230-4d22-bf51-714d91e52b68-gcp-creds\") on node \"addons-723800\" DevicePath \"\""
	Mar 07 22:45:31 addons-723800 kubelet[2755]: I0307 22:45:31.877704    2755 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4mxg6\" (UniqueName: \"kubernetes.io/projected/edd9e0bc-1230-4d22-bf51-714d91e52b68-kube-api-access-4mxg6\") on node \"addons-723800\" DevicePath \"\""
	Mar 07 22:45:32 addons-723800 kubelet[2755]: I0307 22:45:32.400688    2755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7c2629a9c491e65215c838083102c94352e4b8cb2f672c10285ed93140904d1"
	Mar 07 22:45:33 addons-723800 kubelet[2755]: I0307 22:45:33.088346    2755 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/143a4a10-8313-40ab-a7f6-613f980a9728-device-plugin" (OuterVolumeSpecName: "device-plugin") pod "143a4a10-8313-40ab-a7f6-613f980a9728" (UID: "143a4a10-8313-40ab-a7f6-613f980a9728"). InnerVolumeSpecName "device-plugin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 07 22:45:33 addons-723800 kubelet[2755]: I0307 22:45:33.088469    2755 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/143a4a10-8313-40ab-a7f6-613f980a9728-device-plugin\") pod \"143a4a10-8313-40ab-a7f6-613f980a9728\" (UID: \"143a4a10-8313-40ab-a7f6-613f980a9728\") "
	Mar 07 22:45:33 addons-723800 kubelet[2755]: I0307 22:45:33.088585    2755 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8zp8\" (UniqueName: \"kubernetes.io/projected/143a4a10-8313-40ab-a7f6-613f980a9728-kube-api-access-c8zp8\") pod \"143a4a10-8313-40ab-a7f6-613f980a9728\" (UID: \"143a4a10-8313-40ab-a7f6-613f980a9728\") "
	Mar 07 22:45:33 addons-723800 kubelet[2755]: I0307 22:45:33.090440    2755 reconciler_common.go:300] "Volume detached for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/143a4a10-8313-40ab-a7f6-613f980a9728-device-plugin\") on node \"addons-723800\" DevicePath \"\""
	Mar 07 22:45:33 addons-723800 kubelet[2755]: I0307 22:45:33.097230    2755 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143a4a10-8313-40ab-a7f6-613f980a9728-kube-api-access-c8zp8" (OuterVolumeSpecName: "kube-api-access-c8zp8") pod "143a4a10-8313-40ab-a7f6-613f980a9728" (UID: "143a4a10-8313-40ab-a7f6-613f980a9728"). InnerVolumeSpecName "kube-api-access-c8zp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 22:45:33 addons-723800 kubelet[2755]: I0307 22:45:33.191643    2755 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c8zp8\" (UniqueName: \"kubernetes.io/projected/143a4a10-8313-40ab-a7f6-613f980a9728-kube-api-access-c8zp8\") on node \"addons-723800\" DevicePath \"\""
	Mar 07 22:45:33 addons-723800 kubelet[2755]: I0307 22:45:33.451913    2755 scope.go:117] "RemoveContainer" containerID="c03617c94f2797bf6c1d4bb740a53a966ba01ebcf7ee9f55205f1edde8978276"
	Mar 07 22:45:33 addons-723800 kubelet[2755]: I0307 22:45:33.526341    2755 scope.go:117] "RemoveContainer" containerID="c03617c94f2797bf6c1d4bb740a53a966ba01ebcf7ee9f55205f1edde8978276"
	Mar 07 22:45:33 addons-723800 kubelet[2755]: E0307 22:45:33.535322    2755 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c03617c94f2797bf6c1d4bb740a53a966ba01ebcf7ee9f55205f1edde8978276" containerID="c03617c94f2797bf6c1d4bb740a53a966ba01ebcf7ee9f55205f1edde8978276"
	Mar 07 22:45:33 addons-723800 kubelet[2755]: I0307 22:45:33.535391    2755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c03617c94f2797bf6c1d4bb740a53a966ba01ebcf7ee9f55205f1edde8978276"} err="failed to get container status \"c03617c94f2797bf6c1d4bb740a53a966ba01ebcf7ee9f55205f1edde8978276\": rpc error: code = Unknown desc = Error response from daemon: No such container: c03617c94f2797bf6c1d4bb740a53a966ba01ebcf7ee9f55205f1edde8978276"
	Mar 07 22:45:33 addons-723800 kubelet[2755]: I0307 22:45:33.972559    2755 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="143a4a10-8313-40ab-a7f6-613f980a9728" path="/var/lib/kubelet/pods/143a4a10-8313-40ab-a7f6-613f980a9728/volumes"
	Mar 07 22:45:33 addons-723800 kubelet[2755]: I0307 22:45:33.973456    2755 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="edd9e0bc-1230-4d22-bf51-714d91e52b68" path="/var/lib/kubelet/pods/edd9e0bc-1230-4d22-bf51-714d91e52b68/volumes"
	
	
	==> storage-provisioner [96f170a30b1e] <==
	I0307 22:42:07.269495       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 22:42:07.333345       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 22:42:07.333391       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 22:42:07.441885       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 22:42:07.444968       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"09a7236f-40a5-4787-a6d2-8d0e65fdb6e7", APIVersion:"v1", ResourceVersion:"720", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-723800_70f17f96-49f1-46af-8b21-7952c4d0e18a became leader
	I0307 22:42:07.445352       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-723800_70f17f96-49f1-46af-8b21-7952c4d0e18a!
	I0307 22:42:07.645793       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-723800_70f17f96-49f1-46af-8b21-7952c4d0e18a!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 22:45:24.323960    2120 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-723800 -n addons-723800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-723800 -n addons-723800: (12.9436088s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-723800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-wkrxc ingress-nginx-admission-patch-rwnsm
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-723800 describe pod ingress-nginx-admission-create-wkrxc ingress-nginx-admission-patch-rwnsm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-723800 describe pod ingress-nginx-admission-create-wkrxc ingress-nginx-admission-patch-rwnsm: exit status 1 (212.0326ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wkrxc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rwnsm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-723800 describe pod ingress-nginx-admission-create-wkrxc ingress-nginx-admission-patch-rwnsm: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.82s)

                                                
                                    
TestErrorSpam/setup (172.22s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-267700 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 --driver=hyperv
E0307 22:49:37.328786    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:49:37.343959    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:49:37.359322    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:49:37.390334    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:49:37.437781    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:49:37.532725    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:49:37.706878    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:49:38.040557    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:49:38.694591    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:49:39.985759    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:49:42.557601    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:49:47.682492    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:49:57.936912    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:50:18.431520    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:50:59.396147    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 22:52:21.328963    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-267700 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 --driver=hyperv: (2m52.2231105s)
error_spam_test.go:96: unexpected stderr: "W0307 22:49:36.543690   13232 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-267700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
- KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
- MINIKUBE_LOCATION=16214
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-267700" primary control-plane node in "nospam-267700" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-267700" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0307 22:49:36.543690   13232 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (172.22s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (30.91s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-934300 -n functional-934300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-934300 -n functional-934300: (10.9079862s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 logs -n 25: (7.9368263s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-267700 --log_dir                                     | nospam-267700     | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:53 UTC | 07 Mar 24 22:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-267700 --log_dir                                     | nospam-267700     | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:53 UTC | 07 Mar 24 22:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-267700 --log_dir                                     | nospam-267700     | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:53 UTC | 07 Mar 24 22:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-267700 --log_dir                                     | nospam-267700     | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:53 UTC | 07 Mar 24 22:53 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-267700 --log_dir                                     | nospam-267700     | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:53 UTC | 07 Mar 24 22:54 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-267700 --log_dir                                     | nospam-267700     | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:54 UTC | 07 Mar 24 22:54 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-267700 --log_dir                                     | nospam-267700     | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:54 UTC | 07 Mar 24 22:54 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-267700                                            | nospam-267700     | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:54 UTC | 07 Mar 24 22:55 UTC |
	| start   | -p functional-934300                                        | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:55 UTC | 07 Mar 24 22:58 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-934300                                        | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:58 UTC | 07 Mar 24 23:00 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-934300 cache add                                 | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:00 UTC | 07 Mar 24 23:00 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-934300 cache add                                 | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:00 UTC | 07 Mar 24 23:00 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-934300 cache add                                 | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:00 UTC | 07 Mar 24 23:01 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-934300 cache add                                 | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:01 UTC | 07 Mar 24 23:01 UTC |
	|         | minikube-local-cache-test:functional-934300                 |                   |                   |         |                     |                     |
	| cache   | functional-934300 cache delete                              | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:01 UTC | 07 Mar 24 23:01 UTC |
	|         | minikube-local-cache-test:functional-934300                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:01 UTC | 07 Mar 24 23:01 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:01 UTC | 07 Mar 24 23:01 UTC |
	| ssh     | functional-934300 ssh sudo                                  | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:01 UTC | 07 Mar 24 23:01 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-934300                                           | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:01 UTC | 07 Mar 24 23:01 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-934300 ssh                                       | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:01 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-934300 cache reload                              | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:01 UTC | 07 Mar 24 23:01 UTC |
	| ssh     | functional-934300 ssh                                       | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:01 UTC | 07 Mar 24 23:01 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:01 UTC | 07 Mar 24 23:01 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:01 UTC | 07 Mar 24 23:01 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-934300 kubectl --                                | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:01 UTC | 07 Mar 24 23:01 UTC |
	|         | --context functional-934300                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 22:58:41
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 22:58:41.312069   13728 out.go:291] Setting OutFile to fd 412 ...
	I0307 22:58:41.313022   13728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:58:41.313075   13728 out.go:304] Setting ErrFile to fd 716...
	I0307 22:58:41.313075   13728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:58:41.327874   13728 out.go:298] Setting JSON to false
	I0307 22:58:41.334197   13728 start.go:129] hostinfo: {"hostname":"minikube7","uptime":11275,"bootTime":1709841045,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0307 22:58:41.334197   13728 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 22:58:41.342227   13728 out.go:177] * [functional-934300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0307 22:58:41.342525   13728 notify.go:220] Checking for updates...
	I0307 22:58:41.348299   13728 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 22:58:41.350953   13728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 22:58:41.353446   13728 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0307 22:58:41.355759   13728 out.go:177]   - MINIKUBE_LOCATION=16214
	I0307 22:58:41.357619   13728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 22:58:41.359554   13728 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 22:58:41.359554   13728 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 22:58:46.084501   13728 out.go:177] * Using the hyperv driver based on existing profile
	I0307 22:58:46.087988   13728 start.go:297] selected driver: hyperv
	I0307 22:58:46.087988   13728 start.go:901] validating driver "hyperv" against &{Name:functional-934300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.28.4 ClusterName:functional-934300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.27 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 22:58:46.087988   13728 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 22:58:46.138470   13728 cni.go:84] Creating CNI manager for ""
	I0307 22:58:46.138577   13728 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 22:58:46.138848   13728 start.go:340] cluster config:
	{Name:functional-934300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-934300 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.27 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 22:58:46.138848   13728 iso.go:125] acquiring lock: {Name:mk41e0d38e058de906ab8df117c3158b3dc0e5b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:58:46.143381   13728 out.go:177] * Starting "functional-934300" primary control-plane node in "functional-934300" cluster
	I0307 22:58:46.145798   13728 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 22:58:46.145798   13728 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0307 22:58:46.145798   13728 cache.go:56] Caching tarball of preloaded images
	I0307 22:58:46.145798   13728 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 22:58:46.145798   13728 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 22:58:46.147019   13728 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\config.json ...
	I0307 22:58:46.149385   13728 start.go:360] acquireMachinesLock for functional-934300: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 22:58:46.149385   13728 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-934300"
	I0307 22:58:46.149385   13728 start.go:96] Skipping create...Using existing machine configuration
	I0307 22:58:46.149385   13728 fix.go:54] fixHost starting: 
	I0307 22:58:46.150215   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:58:48.544535   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:58:48.544535   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:58:48.555109   13728 fix.go:112] recreateIfNeeded on functional-934300: state=Running err=<nil>
	W0307 22:58:48.555109   13728 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 22:58:48.559532   13728 out.go:177] * Updating the running hyperv "functional-934300" VM ...
	I0307 22:58:48.562102   13728 machine.go:94] provisionDockerMachine start ...
	I0307 22:58:48.562102   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:58:50.411023   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:58:50.411023   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:58:50.411023   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:58:52.656341   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:58:52.656341   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:58:52.662548   13728 main.go:141] libmachine: Using SSH client type: native
	I0307 22:58:52.662973   13728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.27 22 <nil> <nil>}
	I0307 22:58:52.662973   13728 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 22:58:52.802214   13728 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-934300
	
	I0307 22:58:52.802214   13728 buildroot.go:166] provisioning hostname "functional-934300"
	I0307 22:58:52.802214   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:58:54.718752   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:58:54.718820   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:58:54.718820   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:58:56.936419   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:58:56.937951   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:58:56.942666   13728 main.go:141] libmachine: Using SSH client type: native
	I0307 22:58:56.943246   13728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.27 22 <nil> <nil>}
	I0307 22:58:56.943246   13728 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-934300 && echo "functional-934300" | sudo tee /etc/hostname
	I0307 22:58:57.103168   13728 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-934300
	
	I0307 22:58:57.103168   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:58:58.995319   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:58:58.995319   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:58:59.007121   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:59:01.279361   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:59:01.279361   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:01.295453   13728 main.go:141] libmachine: Using SSH client type: native
	I0307 22:59:01.296071   13728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.27 22 <nil> <nil>}
	I0307 22:59:01.296123   13728 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-934300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-934300/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-934300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 22:59:01.438880   13728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 22:59:01.438880   13728 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0307 22:59:01.438880   13728 buildroot.go:174] setting up certificates
	I0307 22:59:01.438880   13728 provision.go:84] configureAuth start
	I0307 22:59:01.438880   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:59:03.291086   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:59:03.291086   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:03.301658   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:59:05.526038   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:59:05.526038   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:05.526246   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:59:07.422167   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:59:07.422167   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:07.433320   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:59:09.641035   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:59:09.649791   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:09.649868   13728 provision.go:143] copyHostCerts
	I0307 22:59:09.649868   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0307 22:59:09.649868   13728 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0307 22:59:09.649868   13728 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0307 22:59:09.650659   13728 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0307 22:59:09.651753   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0307 22:59:09.651941   13728 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0307 22:59:09.652010   13728 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0307 22:59:09.652343   13728 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0307 22:59:09.653443   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0307 22:59:09.653701   13728 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0307 22:59:09.653701   13728 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0307 22:59:09.654027   13728 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0307 22:59:09.655096   13728 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-934300 san=[127.0.0.1 172.20.58.27 functional-934300 localhost minikube]
	I0307 22:59:10.020650   13728 provision.go:177] copyRemoteCerts
	I0307 22:59:10.030902   13728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 22:59:10.030902   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:59:11.928971   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:59:11.938017   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:11.938017   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:59:14.127474   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:59:14.127474   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:14.137377   13728 sshutil.go:53] new ssh client: &{IP:172.20.58.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-934300\id_rsa Username:docker}
	I0307 22:59:14.246989   13728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2159818s)
	I0307 22:59:14.247037   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0307 22:59:14.247037   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 22:59:14.292963   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0307 22:59:14.293566   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0307 22:59:14.331330   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0307 22:59:14.331418   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 22:59:14.375310   13728 provision.go:87] duration metric: took 12.9363085s to configureAuth
	I0307 22:59:14.375310   13728 buildroot.go:189] setting minikube options for container-runtime
	I0307 22:59:14.376114   13728 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 22:59:14.376114   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:59:16.250932   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:59:16.250987   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:16.250987   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:59:18.453804   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:59:18.453804   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:18.467925   13728 main.go:141] libmachine: Using SSH client type: native
	I0307 22:59:18.468615   13728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.27 22 <nil> <nil>}
	I0307 22:59:18.468615   13728 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 22:59:18.599699   13728 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 22:59:18.600237   13728 buildroot.go:70] root file system type: tmpfs
	I0307 22:59:18.600534   13728 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 22:59:18.600587   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:59:20.437482   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:59:20.437541   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:20.437541   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:59:22.663517   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:59:22.678959   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:22.684280   13728 main.go:141] libmachine: Using SSH client type: native
	I0307 22:59:22.684409   13728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.27 22 <nil> <nil>}
	I0307 22:59:22.684409   13728 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 22:59:22.844167   13728 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 22:59:22.844273   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:59:24.704335   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:59:24.704335   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:24.704335   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:59:26.900004   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:59:26.900004   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:26.914156   13728 main.go:141] libmachine: Using SSH client type: native
	I0307 22:59:26.915105   13728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.27 22 <nil> <nil>}
	I0307 22:59:26.915185   13728 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 22:59:27.057027   13728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 22:59:27.057027   13728 machine.go:97] duration metric: took 38.4945628s to provisionDockerMachine
	I0307 22:59:27.057027   13728 start.go:293] postStartSetup for "functional-934300" (driver="hyperv")
	I0307 22:59:27.057027   13728 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 22:59:27.068670   13728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 22:59:27.068670   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:59:28.948703   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:59:28.958450   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:28.958523   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:59:31.184859   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:59:31.184913   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:31.184913   13728 sshutil.go:53] new ssh client: &{IP:172.20.58.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-934300\id_rsa Username:docker}
	I0307 22:59:31.286357   13728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2176469s)
	I0307 22:59:31.297006   13728 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 22:59:31.303814   13728 command_runner.go:130] > NAME=Buildroot
	I0307 22:59:31.303814   13728 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0307 22:59:31.303814   13728 command_runner.go:130] > ID=buildroot
	I0307 22:59:31.303814   13728 command_runner.go:130] > VERSION_ID=2023.02.9
	I0307 22:59:31.303814   13728 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0307 22:59:31.303814   13728 info.go:137] Remote host: Buildroot 2023.02.9
	I0307 22:59:31.303814   13728 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0307 22:59:31.304451   13728 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0307 22:59:31.305076   13728 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0307 22:59:31.305076   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0307 22:59:31.305839   13728 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\8324\hosts -> hosts in /etc/test/nested/copy/8324
	I0307 22:59:31.305839   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\8324\hosts -> /etc/test/nested/copy/8324/hosts
	I0307 22:59:31.318092   13728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8324
	I0307 22:59:31.333679   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0307 22:59:31.373613   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\8324\hosts --> /etc/test/nested/copy/8324/hosts (40 bytes)
	I0307 22:59:31.413150   13728 start.go:296] duration metric: took 4.3560819s for postStartSetup
	I0307 22:59:31.413150   13728 fix.go:56] duration metric: took 45.2633393s for fixHost
	I0307 22:59:31.413150   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:59:33.213780   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:59:33.213780   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:33.213780   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:59:35.424615   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:59:35.424615   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:35.440783   13728 main.go:141] libmachine: Using SSH client type: native
	I0307 22:59:35.441498   13728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.27 22 <nil> <nil>}
	I0307 22:59:35.441498   13728 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0307 22:59:35.574124   13728 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709852375.584003605
	
	I0307 22:59:35.574124   13728 fix.go:216] guest clock: 1709852375.584003605
	I0307 22:59:35.574124   13728 fix.go:229] Guest: 2024-03-07 22:59:35.584003605 +0000 UTC Remote: 2024-03-07 22:59:31.4131502 +0000 UTC m=+50.273040201 (delta=4.170853405s)
	I0307 22:59:35.574124   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:59:37.381590   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:59:37.381590   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:37.390724   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:59:39.566499   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:59:39.566499   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:39.572079   13728 main.go:141] libmachine: Using SSH client type: native
	I0307 22:59:39.572152   13728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.27 22 <nil> <nil>}
	I0307 22:59:39.572152   13728 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709852375
	I0307 22:59:39.718762   13728 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar  7 22:59:35 UTC 2024
	
	I0307 22:59:39.718762   13728 fix.go:236] clock set: Thu Mar  7 22:59:35 UTC 2024
	 (err=<nil>)
	I0307 22:59:39.718762   13728 start.go:83] releasing machines lock for "functional-934300", held for 53.5688732s
	I0307 22:59:39.718762   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:59:41.535216   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:59:41.535273   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:41.535273   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:59:43.727303   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:59:43.727303   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:43.742945   13728 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 22:59:43.743069   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:59:43.757712   13728 ssh_runner.go:195] Run: cat /version.json
	I0307 22:59:43.757823   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 22:59:45.698916   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:59:45.707831   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:45.707831   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:59:45.708600   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 22:59:45.708600   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:45.708600   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 22:59:48.075653   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:59:48.075653   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:48.075653   13728 sshutil.go:53] new ssh client: &{IP:172.20.58.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-934300\id_rsa Username:docker}
	I0307 22:59:48.101186   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 22:59:48.101186   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 22:59:48.107972   13728 sshutil.go:53] new ssh client: &{IP:172.20.58.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-934300\id_rsa Username:docker}
	I0307 22:59:48.233601   13728 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0307 22:59:48.233723   13728 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0307 22:59:48.233723   13728 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.4906133s)
	I0307 22:59:48.233723   13728 ssh_runner.go:235] Completed: cat /version.json: (4.4759687s)
	I0307 22:59:48.243165   13728 ssh_runner.go:195] Run: systemctl --version
	I0307 22:59:48.253295   13728 command_runner.go:130] > systemd 252 (252)
	I0307 22:59:48.253428   13728 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0307 22:59:48.266704   13728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 22:59:48.277602   13728 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0307 22:59:48.278387   13728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 22:59:48.288444   13728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 22:59:48.304046   13728 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0307 22:59:48.304046   13728 start.go:494] detecting cgroup driver to use...
	I0307 22:59:48.304046   13728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 22:59:48.332868   13728 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0307 22:59:48.344664   13728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 22:59:48.374447   13728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 22:59:48.376338   13728 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 22:59:48.401051   13728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 22:59:48.427714   13728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 22:59:48.460542   13728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 22:59:48.487119   13728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 22:59:48.513682   13728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 22:59:48.541238   13728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 22:59:48.568726   13728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 22:59:48.584310   13728 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0307 22:59:48.596758   13728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 22:59:48.621048   13728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:59:48.845588   13728 ssh_runner.go:195] Run: sudo systemctl restart containerd
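
The sed sequence above rewrites /etc/containerd/config.toml in place: the pause:3.9 sandbox image, restrict_oom_score_adj = false, SystemdCgroup = false (the cgroupfs driver), the runc v2 runtime, and conf_dir = "/etc/cni/net.d", followed by a daemon-reload and a containerd restart. A rough Go sketch that drives the same edits through a local shell, purely for illustration; minikube actually issues them through its SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyContainerdEdits runs the same style of sed edits the log shows,
    // here against a local shell for illustration only.
    func applyContainerdEdits() error {
        edits := []string{
            `sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
            `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
            `sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
            `sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
        }
        for _, e := range edits {
            if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
                return fmt.Errorf("%s: %v (%s)", e, err, out)
            }
        }
        // Followed by: sudo systemctl daemon-reload && sudo systemctl restart containerd
        return nil
    }

    func main() {
        if err := applyContainerdEdits(); err != nil {
            fmt.Println("edit failed:", err)
        }
    }
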
	I0307 22:59:48.874073   13728 start.go:494] detecting cgroup driver to use...
	I0307 22:59:48.886354   13728 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 22:59:48.906744   13728 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0307 22:59:48.906770   13728 command_runner.go:130] > [Unit]
	I0307 22:59:48.906770   13728 command_runner.go:130] > Description=Docker Application Container Engine
	I0307 22:59:48.906770   13728 command_runner.go:130] > Documentation=https://docs.docker.com
	I0307 22:59:48.906770   13728 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0307 22:59:48.906770   13728 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0307 22:59:48.906770   13728 command_runner.go:130] > StartLimitBurst=3
	I0307 22:59:48.906770   13728 command_runner.go:130] > StartLimitIntervalSec=60
	I0307 22:59:48.906770   13728 command_runner.go:130] > [Service]
	I0307 22:59:48.906770   13728 command_runner.go:130] > Type=notify
	I0307 22:59:48.906770   13728 command_runner.go:130] > Restart=on-failure
	I0307 22:59:48.906770   13728 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0307 22:59:48.906770   13728 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0307 22:59:48.906770   13728 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0307 22:59:48.906770   13728 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0307 22:59:48.906770   13728 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0307 22:59:48.906770   13728 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0307 22:59:48.906770   13728 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0307 22:59:48.906770   13728 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0307 22:59:48.906770   13728 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0307 22:59:48.906770   13728 command_runner.go:130] > ExecStart=
	I0307 22:59:48.906770   13728 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0307 22:59:48.906770   13728 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0307 22:59:48.906770   13728 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0307 22:59:48.906770   13728 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0307 22:59:48.906770   13728 command_runner.go:130] > LimitNOFILE=infinity
	I0307 22:59:48.906770   13728 command_runner.go:130] > LimitNPROC=infinity
	I0307 22:59:48.906770   13728 command_runner.go:130] > LimitCORE=infinity
	I0307 22:59:48.906770   13728 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0307 22:59:48.906770   13728 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0307 22:59:48.906770   13728 command_runner.go:130] > TasksMax=infinity
	I0307 22:59:48.906770   13728 command_runner.go:130] > TimeoutStartSec=0
	I0307 22:59:48.906770   13728 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0307 22:59:48.906770   13728 command_runner.go:130] > Delegate=yes
	I0307 22:59:48.906770   13728 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0307 22:59:48.906770   13728 command_runner.go:130] > KillMode=process
	I0307 22:59:48.906770   13728 command_runner.go:130] > [Install]
	I0307 22:59:48.906770   13728 command_runner.go:130] > WantedBy=multi-user.target
	I0307 22:59:48.918458   13728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 22:59:48.945581   13728 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 22:59:48.990515   13728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 22:59:49.020946   13728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 22:59:49.041022   13728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 22:59:49.068914   13728 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0307 22:59:49.080509   13728 ssh_runner.go:195] Run: which cri-dockerd
	I0307 22:59:49.086383   13728 command_runner.go:130] > /usr/bin/cri-dockerd
	I0307 22:59:49.097937   13728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 22:59:49.120996   13728 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 22:59:49.159146   13728 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 22:59:49.394242   13728 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 22:59:49.592238   13728 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 22:59:49.592238   13728 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 22:59:49.630858   13728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:59:49.844652   13728 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 23:00:01.728099   13728 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.8830979s)
	I0307 23:00:01.739468   13728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 23:00:01.779254   13728 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0307 23:00:01.817916   13728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 23:00:01.846353   13728 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 23:00:02.011659   13728 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 23:00:02.182350   13728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:00:02.353946   13728 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 23:00:02.390320   13728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 23:00:02.419290   13728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:00:02.583967   13728 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 23:00:02.675170   13728 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 23:00:02.686244   13728 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 23:00:02.694851   13728 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0307 23:00:02.694851   13728 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0307 23:00:02.694851   13728 command_runner.go:130] > Device: 0,22	Inode: 1512        Links: 1
	I0307 23:00:02.694851   13728 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0307 23:00:02.694851   13728 command_runner.go:130] > Access: 2024-03-07 23:00:02.615507406 +0000
	I0307 23:00:02.694851   13728 command_runner.go:130] > Modify: 2024-03-07 23:00:02.615507406 +0000
	I0307 23:00:02.694851   13728 command_runner.go:130] > Change: 2024-03-07 23:00:02.618506425 +0000
	I0307 23:00:02.694851   13728 command_runner.go:130] >  Birth: -
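
start.go gives the freshly restarted cri-dockerd 60 seconds for /var/run/cri-dockerd.sock to appear before moving on to crictl. A small sketch of that kind of wait loop; the poll interval is an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the socket path exists or the timeout expires,
    // mirroring the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }
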
	I0307 23:00:02.694851   13728 start.go:562] Will wait 60s for crictl version
	I0307 23:00:02.705948   13728 ssh_runner.go:195] Run: which crictl
	I0307 23:00:02.711649   13728 command_runner.go:130] > /usr/bin/crictl
	I0307 23:00:02.723208   13728 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 23:00:02.780149   13728 command_runner.go:130] > Version:  0.1.0
	I0307 23:00:02.780149   13728 command_runner.go:130] > RuntimeName:  docker
	I0307 23:00:02.780227   13728 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0307 23:00:02.780227   13728 command_runner.go:130] > RuntimeApiVersion:  v1
	I0307 23:00:02.780227   13728 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0307 23:00:02.787554   13728 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 23:00:02.821184   13728 command_runner.go:130] > 24.0.7
	I0307 23:00:02.828485   13728 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 23:00:02.855246   13728 command_runner.go:130] > 24.0.7
	I0307 23:00:02.859411   13728 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0307 23:00:02.859411   13728 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0307 23:00:02.863698   13728 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0307 23:00:02.863698   13728 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0307 23:00:02.864285   13728 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0307 23:00:02.864285   13728 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0307 23:00:02.866664   13728 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0307 23:00:02.866664   13728 ip.go:210] interface addr: 172.20.48.1/20
	I0307 23:00:02.876945   13728 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0307 23:00:02.884477   13728 command_runner.go:130] > 172.20.48.1	host.minikube.internal
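
ip.go resolves host.minikube.internal by scanning the host's network interfaces for one whose name starts with "vEthernet (Default Switch)" and taking its IPv4 address, 172.20.48.1 above. A sketch of that lookup with the standard net package; the function name is ours:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // hostIPForPrefix returns the first IPv4 address on an interface whose
    // name starts with the given prefix, similar to ip.go's getIPForInterface.
    func hostIPForPrefix(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, iface := range ifaces {
            if !strings.HasPrefix(iface.Name, prefix) {
                continue
            }
            addrs, err := iface.Addrs()
            if err != nil {
                return nil, err
            }
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    return ipnet.IP, nil
                }
            }
        }
        return nil, fmt.Errorf("no interface matches prefix %q", prefix)
    }

    func main() {
        ip, err := hostIPForPrefix("vEthernet (Default Switch)")
        fmt.Println(ip, err)
    }
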
	I0307 23:00:02.884663   13728 kubeadm.go:877] updating cluster {Name:functional-934300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-934300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.27 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 23:00:02.884663   13728 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 23:00:02.893809   13728 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 23:00:02.914415   13728 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0307 23:00:02.914415   13728 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0307 23:00:02.914415   13728 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0307 23:00:02.914415   13728 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0307 23:00:02.914415   13728 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0307 23:00:02.914415   13728 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0307 23:00:02.914415   13728 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 23:00:02.914415   13728 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 23:00:02.914415   13728 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 23:00:02.914415   13728 docker.go:615] Images already preloaded, skipping extraction
	I0307 23:00:02.923355   13728 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 23:00:02.944668   13728 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0307 23:00:02.944768   13728 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0307 23:00:02.944768   13728 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0307 23:00:02.944768   13728 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0307 23:00:02.944768   13728 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0307 23:00:02.944845   13728 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0307 23:00:02.944845   13728 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0307 23:00:02.944869   13728 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 23:00:02.944919   13728 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 23:00:02.944919   13728 cache_images.go:84] Images are preloaded, skipping loading
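
Both docker images listings are compared against the set of images the preload tarball should provide; since every expected image is already present, extraction and cache loading are skipped. A sketch of that containment check, with the expected list copied from the stdout block above:

    package main

    import "fmt"

    // imagesPreloaded reports whether every expected image appears in the
    // `docker images --format {{.Repository}}:{{.Tag}}` output.
    func imagesPreloaded(have []string, want []string) bool {
        present := make(map[string]bool, len(have))
        for _, img := range have {
            present[img] = true
        }
        for _, img := range want {
            if !present[img] {
                return false
            }
        }
        return true
    }

    func main() {
        have := []string{
            "registry.k8s.io/kube-apiserver:v1.28.4",
            "registry.k8s.io/kube-proxy:v1.28.4",
            "registry.k8s.io/kube-controller-manager:v1.28.4",
            "registry.k8s.io/kube-scheduler:v1.28.4",
            "registry.k8s.io/etcd:3.5.9-0",
            "registry.k8s.io/coredns/coredns:v1.10.1",
            "registry.k8s.io/pause:3.9",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }
        fmt.Println(imagesPreloaded(have, have)) // true -> "Images are preloaded, skipping loading"
    }
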
	I0307 23:00:02.944919   13728 kubeadm.go:928] updating node { 172.20.58.27 8441 v1.28.4 docker true true} ...
	I0307 23:00:02.944919   13728 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-934300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.58.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-934300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 23:00:02.953177   13728 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 23:00:02.979938   13728 command_runner.go:130] > cgroupfs
	I0307 23:00:02.980296   13728 cni.go:84] Creating CNI manager for ""
	I0307 23:00:02.980296   13728 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 23:00:02.980296   13728 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 23:00:02.980296   13728 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.58.27 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-934300 NodeName:functional-934300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.58.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.58.27 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 23:00:02.981028   13728 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.58.27
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-934300"
	  kubeletExtraArgs:
	    node-ip: 172.20.58.27
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.58.27"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 23:00:02.996543   13728 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 23:00:03.011565   13728 command_runner.go:130] > kubeadm
	I0307 23:00:03.011565   13728 command_runner.go:130] > kubectl
	I0307 23:00:03.011565   13728 command_runner.go:130] > kubelet
	I0307 23:00:03.011565   13728 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 23:00:03.022871   13728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 23:00:03.039459   13728 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0307 23:00:03.065863   13728 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 23:00:03.090664   13728 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0307 23:00:03.129863   13728 ssh_runner.go:195] Run: grep 172.20.58.27	control-plane.minikube.internal$ /etc/hosts
	I0307 23:00:03.135772   13728 command_runner.go:130] > 172.20.58.27	control-plane.minikube.internal
	I0307 23:00:03.146004   13728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:00:03.314345   13728 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 23:00:03.334952   13728 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300 for IP: 172.20.58.27
	I0307 23:00:03.334979   13728 certs.go:194] generating shared ca certs ...
	I0307 23:00:03.334979   13728 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:00:03.335839   13728 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0307 23:00:03.335839   13728 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0307 23:00:03.336386   13728 certs.go:256] generating profile certs ...
	I0307 23:00:03.336498   13728 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.key
	I0307 23:00:03.337419   13728 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\apiserver.key.1c9ea5eb
	I0307 23:00:03.337419   13728 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\proxy-client.key
	I0307 23:00:03.337419   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 23:00:03.337419   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0307 23:00:03.337991   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 23:00:03.338210   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 23:00:03.338297   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0307 23:00:03.338297   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0307 23:00:03.338297   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0307 23:00:03.338297   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0307 23:00:03.339351   13728 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0307 23:00:03.339676   13728 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0307 23:00:03.339793   13728 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0307 23:00:03.340086   13728 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0307 23:00:03.340233   13728 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0307 23:00:03.340233   13728 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0307 23:00:03.340233   13728 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0307 23:00:03.341527   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:00:03.341669   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0307 23:00:03.341669   13728 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0307 23:00:03.343181   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 23:00:03.380775   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 23:00:03.416513   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 23:00:03.446559   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0307 23:00:03.488399   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0307 23:00:03.524915   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 23:00:03.568525   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 23:00:03.681433   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0307 23:00:03.733300   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 23:00:03.786796   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0307 23:00:03.845935   13728 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0307 23:00:03.894484   13728 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 23:00:03.938783   13728 ssh_runner.go:195] Run: openssl version
	I0307 23:00:03.947203   13728 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0307 23:00:03.957803   13728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 23:00:03.993817   13728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:00:04.002009   13728 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:00:04.002933   13728 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:00:04.014869   13728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:00:04.022370   13728 command_runner.go:130] > b5213941
	I0307 23:00:04.034052   13728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 23:00:04.064468   13728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0307 23:00:04.094219   13728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0307 23:00:04.100796   13728 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0307 23:00:04.103207   13728 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0307 23:00:04.114758   13728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0307 23:00:04.117116   13728 command_runner.go:130] > 51391683
	I0307 23:00:04.136907   13728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0307 23:00:04.183725   13728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0307 23:00:04.213054   13728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0307 23:00:04.221863   13728 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0307 23:00:04.221952   13728 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0307 23:00:04.233755   13728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0307 23:00:04.241456   13728 command_runner.go:130] > 3ec20f2e
	I0307 23:00:04.252027   13728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
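
Each CA bundle copied to /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 above) so the system trust store picks it up. A sketch of that hash-and-symlink step, shelling out to openssl the same way the log does; error handling is simplified:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash computes `openssl x509 -hash -noout -in <pem>` and creates
    // /etc/ssl/certs/<hash>.0 pointing at the certificate, as in the log above.
    func linkCertByHash(pemPath, certsDir string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
            return "", err
        }
        return link, nil
    }

    func main() {
        link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        fmt.Println(link, err)
    }
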
	I0307 23:00:04.280480   13728 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 23:00:04.282104   13728 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 23:00:04.282104   13728 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0307 23:00:04.282104   13728 command_runner.go:130] > Device: 8,1	Inode: 5242149     Links: 1
	I0307 23:00:04.282104   13728 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0307 23:00:04.287350   13728 command_runner.go:130] > Access: 2024-03-07 22:57:35.403018830 +0000
	I0307 23:00:04.287350   13728 command_runner.go:130] > Modify: 2024-03-07 22:57:35.403018830 +0000
	I0307 23:00:04.287350   13728 command_runner.go:130] > Change: 2024-03-07 22:57:35.403018830 +0000
	I0307 23:00:04.287350   13728 command_runner.go:130] >  Birth: 2024-03-07 22:57:35.403018830 +0000
	I0307 23:00:04.298728   13728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 23:00:04.306286   13728 command_runner.go:130] > Certificate will not expire
	I0307 23:00:04.317299   13728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 23:00:04.324345   13728 command_runner.go:130] > Certificate will not expire
	I0307 23:00:04.337027   13728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 23:00:04.344456   13728 command_runner.go:130] > Certificate will not expire
	I0307 23:00:04.355759   13728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 23:00:04.362052   13728 command_runner.go:130] > Certificate will not expire
	I0307 23:00:04.375164   13728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 23:00:04.385301   13728 command_runner.go:130] > Certificate will not expire
	I0307 23:00:04.397171   13728 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0307 23:00:04.407756   13728 command_runner.go:130] > Certificate will not expire
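
The repeated "openssl x509 -checkend 86400" calls ask whether each control-plane certificate will still be valid 24 hours from now. The equivalent test in pure Go with crypto/x509, against an illustrative path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the equivalent of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err) // false, <nil> corresponds to "Certificate will not expire"
    }
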
	I0307 23:00:04.407756   13728 kubeadm.go:391] StartCluster: {Name:functional-934300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-934300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.27 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 23:00:04.421648   13728 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 23:00:04.473455   13728 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 23:00:04.492097   13728 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0307 23:00:04.492097   13728 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0307 23:00:04.492150   13728 command_runner.go:130] > /var/lib/minikube/etcd:
	I0307 23:00:04.492150   13728 command_runner.go:130] > member
	W0307 23:00:04.492150   13728 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 23:00:04.492150   13728 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 23:00:04.492150   13728 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 23:00:04.502757   13728 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 23:00:04.519400   13728 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 23:00:04.521462   13728 kubeconfig.go:125] found "functional-934300" server: "https://172.20.58.27:8441"
	I0307 23:00:04.522312   13728 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:00:04.522312   13728 kapi.go:59] client config for functional-934300: &rest.Config{Host:"https://172.20.58.27:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-934300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-934300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 23:00:04.524330   13728 cert_rotation.go:137] Starting client certificate rotation controller
	I0307 23:00:04.533474   13728 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 23:00:04.552140   13728 kubeadm.go:624] The running cluster does not require reconfiguration: 172.20.58.27
	I0307 23:00:04.552248   13728 kubeadm.go:1153] stopping kube-system containers ...
	I0307 23:00:04.561485   13728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 23:00:04.596371   13728 command_runner.go:130] > e70fd53104d3
	I0307 23:00:04.601798   13728 command_runner.go:130] > 927d7d95cf90
	I0307 23:00:04.601798   13728 command_runner.go:130] > c0d47fefb077
	I0307 23:00:04.601798   13728 command_runner.go:130] > d7fc57efa5ea
	I0307 23:00:04.601889   13728 command_runner.go:130] > 963ed656114a
	I0307 23:00:04.601889   13728 command_runner.go:130] > 9f88227834c1
	I0307 23:00:04.601889   13728 command_runner.go:130] > 94ca0d6ca93b
	I0307 23:00:04.601889   13728 command_runner.go:130] > de47de55e13b
	I0307 23:00:04.601889   13728 command_runner.go:130] > da5f7fe51c54
	I0307 23:00:04.601889   13728 command_runner.go:130] > 0a2e6b683c32
	I0307 23:00:04.601889   13728 command_runner.go:130] > da6887f6c3fe
	I0307 23:00:04.601889   13728 command_runner.go:130] > 14b951d3a571
	I0307 23:00:04.601889   13728 command_runner.go:130] > a0710c463851
	I0307 23:00:04.601971   13728 command_runner.go:130] > 88f7679212a0
	I0307 23:00:04.601971   13728 command_runner.go:130] > 4d39f2e1578c
	I0307 23:00:04.601971   13728 command_runner.go:130] > f06a13b77d9e
	I0307 23:00:04.601971   13728 command_runner.go:130] > 6b45c7b533c6
	I0307 23:00:04.601971   13728 command_runner.go:130] > 30b7f8914aac
	I0307 23:00:04.601971   13728 command_runner.go:130] > 868a292bf57f
	I0307 23:00:04.601971   13728 command_runner.go:130] > 3a6c8ed8066f
	I0307 23:00:04.602074   13728 command_runner.go:130] > 0f950d9a9745
	I0307 23:00:04.602100   13728 command_runner.go:130] > b4f766140d92
	I0307 23:00:04.602100   13728 docker.go:483] Stopping containers: [e70fd53104d3 927d7d95cf90 c0d47fefb077 d7fc57efa5ea 963ed656114a 9f88227834c1 94ca0d6ca93b de47de55e13b da5f7fe51c54 0a2e6b683c32 da6887f6c3fe 14b951d3a571 a0710c463851 88f7679212a0 4d39f2e1578c f06a13b77d9e 6b45c7b533c6 30b7f8914aac 868a292bf57f 3a6c8ed8066f 0f950d9a9745 b4f766140d92]
	I0307 23:00:04.611750   13728 ssh_runner.go:195] Run: docker stop e70fd53104d3 927d7d95cf90 c0d47fefb077 d7fc57efa5ea 963ed656114a 9f88227834c1 94ca0d6ca93b de47de55e13b da5f7fe51c54 0a2e6b683c32 da6887f6c3fe 14b951d3a571 a0710c463851 88f7679212a0 4d39f2e1578c f06a13b77d9e 6b45c7b533c6 30b7f8914aac 868a292bf57f 3a6c8ed8066f 0f950d9a9745 b4f766140d92
	I0307 23:00:05.384009   13728 command_runner.go:130] > e70fd53104d3
	I0307 23:00:05.384009   13728 command_runner.go:130] > 927d7d95cf90
	I0307 23:00:05.384009   13728 command_runner.go:130] > c0d47fefb077
	I0307 23:00:05.384009   13728 command_runner.go:130] > d7fc57efa5ea
	I0307 23:00:05.384009   13728 command_runner.go:130] > 963ed656114a
	I0307 23:00:05.384009   13728 command_runner.go:130] > 9f88227834c1
	I0307 23:00:05.384009   13728 command_runner.go:130] > 94ca0d6ca93b
	I0307 23:00:05.384009   13728 command_runner.go:130] > de47de55e13b
	I0307 23:00:05.384009   13728 command_runner.go:130] > da5f7fe51c54
	I0307 23:00:05.384009   13728 command_runner.go:130] > 0a2e6b683c32
	I0307 23:00:05.384009   13728 command_runner.go:130] > da6887f6c3fe
	I0307 23:00:05.384009   13728 command_runner.go:130] > 14b951d3a571
	I0307 23:00:05.384009   13728 command_runner.go:130] > a0710c463851
	I0307 23:00:05.384009   13728 command_runner.go:130] > 88f7679212a0
	I0307 23:00:05.384009   13728 command_runner.go:130] > 4d39f2e1578c
	I0307 23:00:05.384009   13728 command_runner.go:130] > f06a13b77d9e
	I0307 23:00:05.384009   13728 command_runner.go:130] > 6b45c7b533c6
	I0307 23:00:05.384009   13728 command_runner.go:130] > 30b7f8914aac
	I0307 23:00:05.384009   13728 command_runner.go:130] > 868a292bf57f
	I0307 23:00:05.384009   13728 command_runner.go:130] > 3a6c8ed8066f
	I0307 23:00:05.384009   13728 command_runner.go:130] > 0f950d9a9745
	I0307 23:00:05.384009   13728 command_runner.go:130] > b4f766140d92
	I0307 23:00:05.399068   13728 ssh_runner.go:195] Run: sudo systemctl stop kubelet
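
Before reconfiguring the control plane, the existing kube-system containers are enumerated with a name filter, stopped in a single docker stop, and kubelet itself is stopped. A sketch of that teardown against a local Docker daemon; the real code drives it over SSH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // stopKubeSystemContainers mirrors the step the log attributes to
    // docker.go:483: list kube-system pod containers by name pattern,
    // then stop them in one invocation.
    func stopKubeSystemContainers() error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return nil
        }
        args := append([]string{"stop"}, ids...)
        return exec.Command("docker", args...).Run()
    }

    func main() {
        fmt.Println(stopKubeSystemContainers())
    }
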
	I0307 23:00:05.465194   13728 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 23:00:05.485754   13728 command_runner.go:130] > -rw------- 1 root root 5639 Mar  7 22:57 /etc/kubernetes/admin.conf
	I0307 23:00:05.485817   13728 command_runner.go:130] > -rw------- 1 root root 5656 Mar  7 22:57 /etc/kubernetes/controller-manager.conf
	I0307 23:00:05.485817   13728 command_runner.go:130] > -rw------- 1 root root 2007 Mar  7 22:57 /etc/kubernetes/kubelet.conf
	I0307 23:00:05.485817   13728 command_runner.go:130] > -rw------- 1 root root 5600 Mar  7 22:57 /etc/kubernetes/scheduler.conf
	I0307 23:00:05.485901   13728 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5639 Mar  7 22:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Mar  7 22:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Mar  7 22:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar  7 22:57 /etc/kubernetes/scheduler.conf
	
	I0307 23:00:05.495543   13728 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0307 23:00:05.514061   13728 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0307 23:00:05.524964   13728 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0307 23:00:05.544456   13728 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0307 23:00:05.557878   13728 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0307 23:00:05.574782   13728 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 23:00:05.586896   13728 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 23:00:05.619951   13728 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0307 23:00:05.671829   13728 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0307 23:00:05.684642   13728 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 23:00:05.716282   13728 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
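
kubeadm.go then greps each kubeconfig under /etc/kubernetes for the expected https://control-plane.minikube.internal:8441 endpoint; controller-manager.conf and scheduler.conf do not contain it, so they are deleted and the freshly rendered kubeadm.yaml.new is copied into place for the init phases below to regenerate them. A small sketch of that prune step, with the endpoint taken from the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pruneStaleKubeconfig deletes the file when it does not reference the
    // expected API server endpoint, so kubeadm will rewrite it.
    func pruneStaleKubeconfig(path, endpoint string) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        if strings.Contains(string(data), endpoint) {
            return false, nil
        }
        return true, os.Remove(path)
    }

    func main() {
        removed, err := pruneStaleKubeconfig(
            "/etc/kubernetes/controller-manager.conf",
            "https://control-plane.minikube.internal:8441")
        fmt.Println(removed, err)
    }
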
	I0307 23:00:05.738345   13728 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 23:00:05.847469   13728 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 23:00:05.847522   13728 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0307 23:00:05.847572   13728 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0307 23:00:05.847572   13728 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0307 23:00:05.847572   13728 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0307 23:00:05.847572   13728 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0307 23:00:05.847622   13728 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0307 23:00:05.847622   13728 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0307 23:00:05.847652   13728 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0307 23:00:05.847652   13728 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0307 23:00:05.847684   13728 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0307 23:00:05.847684   13728 command_runner.go:130] > [certs] Using the existing "sa" key
	I0307 23:00:05.847738   13728 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 23:00:06.528740   13728 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 23:00:06.528817   13728 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0307 23:00:06.528817   13728 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0307 23:00:06.528817   13728 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 23:00:06.528817   13728 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 23:00:06.528879   13728 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0307 23:00:06.791355   13728 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 23:00:06.798679   13728 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 23:00:06.798679   13728 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0307 23:00:06.798760   13728 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 23:00:06.887780   13728 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 23:00:06.887780   13728 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 23:00:06.887780   13728 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 23:00:06.887780   13728 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 23:00:06.887780   13728 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0307 23:00:06.985595   13728 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 23:00:06.985595   13728 api_server.go:52] waiting for apiserver process to appear ...
	I0307 23:00:07.000808   13728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 23:00:07.503167   13728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 23:00:08.002969   13728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 23:00:08.029662   13728 command_runner.go:130] > 7170
	I0307 23:00:08.029760   13728 api_server.go:72] duration metric: took 1.0441547s to wait for apiserver process to appear ...
	I0307 23:00:08.029760   13728 api_server.go:88] waiting for apiserver healthz status ...
	I0307 23:00:08.029891   13728 api_server.go:253] Checking apiserver healthz at https://172.20.58.27:8441/healthz ...
	I0307 23:00:11.635936   13728 api_server.go:279] https://172.20.58.27:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0307 23:00:11.644907   13728 api_server.go:103] status: https://172.20.58.27:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0307 23:00:11.644953   13728 api_server.go:253] Checking apiserver healthz at https://172.20.58.27:8441/healthz ...
	I0307 23:00:11.735554   13728 api_server.go:279] https://172.20.58.27:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0307 23:00:11.735605   13728 api_server.go:103] status: https://172.20.58.27:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0307 23:00:12.044969   13728 api_server.go:253] Checking apiserver healthz at https://172.20.58.27:8441/healthz ...
	I0307 23:00:12.052172   13728 api_server.go:279] https://172.20.58.27:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0307 23:00:12.053416   13728 api_server.go:103] status: https://172.20.58.27:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0307 23:00:12.551870   13728 api_server.go:253] Checking apiserver healthz at https://172.20.58.27:8441/healthz ...
	I0307 23:00:12.562314   13728 api_server.go:279] https://172.20.58.27:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0307 23:00:12.562380   13728 api_server.go:103] status: https://172.20.58.27:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0307 23:00:13.041891   13728 api_server.go:253] Checking apiserver healthz at https://172.20.58.27:8441/healthz ...
	I0307 23:00:13.048473   13728 api_server.go:279] https://172.20.58.27:8441/healthz returned 200:
	ok
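
	[editor's note] The healthz probes above progress from 403 (the unauthenticated probe is rejected for the anonymous user) through 500 (post-start hooks such as rbac/bootstrap-roles have not finished) to 200. A minimal Go sketch of that retry loop follows; it skips TLS verification and presents no client certificate, so against a locked-down apiserver it would keep seeing the 403 from the first probe. It only illustrates the polling shape, not minikube's authenticated client.

	    package main

	    // Sketch: poll an apiserver /healthz endpoint until it returns 200, retrying
	    // with a short delay, as the bootstrapper does above. Address taken from the log.
	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	            Timeout:   5 * time.Second,
	        }
	        url := "https://172.20.58.27:8441/healthz"
	        for attempt := 0; attempt < 20; attempt++ {
	            resp, err := client.Get(url)
	            if err != nil {
	                fmt.Println("request error:", err)
	            } else {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                fmt.Printf("attempt %d: %d %s\n", attempt, resp.StatusCode, body)
	                if resp.StatusCode == http.StatusOK {
	                    return
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	    }
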
	I0307 23:00:13.050263   13728 round_trippers.go:463] GET https://172.20.58.27:8441/version
	I0307 23:00:13.050263   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:13.050263   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:13.050263   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:13.060073   13728 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0307 23:00:13.061384   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:13.061384   13728 round_trippers.go:580]     Content-Length: 264
	I0307 23:00:13.061384   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:13 GMT
	I0307 23:00:13.061384   13728 round_trippers.go:580]     Audit-Id: 3e254030-ed9b-4978-a337-685c59f3c09e
	I0307 23:00:13.061384   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:13.061384   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:13.061384   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:13.061384   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:13.061495   13728 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0307 23:00:13.061626   13728 api_server.go:141] control plane version: v1.28.4
	I0307 23:00:13.061626   13728 api_server.go:131] duration metric: took 5.0318183s to wait for apiserver health ...
	I0307 23:00:13.061626   13728 cni.go:84] Creating CNI manager for ""
	I0307 23:00:13.061626   13728 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 23:00:13.064552   13728 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0307 23:00:13.078658   13728 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0307 23:00:13.093737   13728 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
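
	[editor's note] The 457-byte /etc/cni/net.d/1-k8s.conflist copied here is not printed in the log, so its exact contents are unknown. As a rough, assumption-laden illustration of what a bridge CNI conflist of this kind typically looks like, the Go sketch below writes a generic bridge + host-local + portmap chain. The JSON is a standard-style example, not the file minikube actually generated.

	    package main

	    // Assumption: subnet, bridge name and plugin chain below are illustrative,
	    // NOT the real 1-k8s.conflist that minikube wrote on this node.
	    import (
	        "fmt"
	        "os"
	    )

	    const conflist = `{
	      "cniVersion": "0.4.0",
	      "name": "k8s",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        {
	          "type": "portmap",
	          "capabilities": { "portMappings": true }
	        }
	      ]
	    }`

	    func main() {
	        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
	            fmt.Println("mkdir failed:", err)
	            return
	        }
	        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
	            fmt.Println("write failed:", err)
	        }
	    }
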
	I0307 23:00:13.121310   13728 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 23:00:13.121540   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods
	I0307 23:00:13.121565   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:13.121565   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:13.121565   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:13.127854   13728 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:00:13.128070   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:13.128070   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:13 GMT
	I0307 23:00:13.128070   13728 round_trippers.go:580]     Audit-Id: 49edc1c7-9304-4e62-a8a2-2285f77ba003
	I0307 23:00:13.128134   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:13.128134   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:13.128134   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:13.128134   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:13.129057   13728 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"550"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"519","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49610 chars]
	I0307 23:00:13.134121   13728 system_pods.go:59] 7 kube-system pods found
	I0307 23:00:13.134121   13728 system_pods.go:61] "coredns-5dd5756b68-qckb6" [1d70d200-b84d-406f-a812-aeada0591d68] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0307 23:00:13.134121   13728 system_pods.go:61] "etcd-functional-934300" [dcc6bd79-f9bb-4acd-a050-37d15b5e949c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0307 23:00:13.134121   13728 system_pods.go:61] "kube-apiserver-functional-934300" [89292fed-5152-47c0-b3fa-44af37af8bc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0307 23:00:13.134121   13728 system_pods.go:61] "kube-controller-manager-functional-934300" [04393d46-35b0-4807-acd9-d46af0a8de3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0307 23:00:13.134121   13728 system_pods.go:61] "kube-proxy-ng97v" [e5408fb9-13f3-46d1-9509-d0c312f0c175] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0307 23:00:13.134121   13728 system_pods.go:61] "kube-scheduler-functional-934300" [a83c5c0c-4e51-4bf5-b002-5c4e59c782d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0307 23:00:13.134121   13728 system_pods.go:61] "storage-provisioner" [c743467f-e104-4404-b662-be573f6ec4a0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0307 23:00:13.134121   13728 system_pods.go:74] duration metric: took 12.7787ms to wait for pod list to return data ...
	I0307 23:00:13.134121   13728 node_conditions.go:102] verifying NodePressure condition ...
	I0307 23:00:13.134121   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes
	I0307 23:00:13.134121   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:13.134121   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:13.134121   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:13.139372   13728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:00:13.139999   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:13.139999   13728 round_trippers.go:580]     Audit-Id: a9cbc2ea-f9da-40d9-a92a-9c4cf1165606
	I0307 23:00:13.140070   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:13.140070   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:13.140101   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:13.140101   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:13.140101   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:13 GMT
	I0307 23:00:13.140219   13728 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"550"},"items":[{"metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4838 chars]
	I0307 23:00:13.141018   13728 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 23:00:13.141103   13728 node_conditions.go:123] node cpu capacity is 2
	I0307 23:00:13.141103   13728 node_conditions.go:105] duration metric: took 6.982ms to run NodePressure ...
	I0307 23:00:13.141156   13728 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0307 23:00:13.391054   13728 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0307 23:00:13.391129   13728 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0307 23:00:13.391129   13728 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0307 23:00:13.391351   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0307 23:00:13.391404   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:13.391404   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:13.391404   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:13.391707   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:13.391707   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:13.395057   13728 round_trippers.go:580]     Audit-Id: 29112f12-65ee-43aa-8b19-0077d521b8c5
	I0307 23:00:13.395057   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:13.395057   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:13.395057   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:13.395057   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:13.395057   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:13 GMT
	I0307 23:00:13.395861   13728 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"552"},"items":[{"metadata":{"name":"etcd-functional-934300","namespace":"kube-system","uid":"dcc6bd79-f9bb-4acd-a050-37d15b5e949c","resourceVersion":"522","creationTimestamp":"2024-03-07T22:57:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.58.27:2379","kubernetes.io/config.hash":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.mirror":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.seen":"2024-03-07T22:57:46.083699801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29415 chars]
	I0307 23:00:13.397279   13728 kubeadm.go:733] kubelet initialised
	I0307 23:00:13.397279   13728 kubeadm.go:734] duration metric: took 6.1497ms waiting for restarted kubelet to initialise ...
	I0307 23:00:13.397350   13728 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
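
	[editor's note] From here the bootstrapper polls the API for each system-critical pod (the coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler pods listed earlier) until its Ready condition is True; that is what the repeated GETs below are doing for coredns-5dd5756b68-qckb6. A hedged client-go sketch of an equivalent check follows; it assumes a reachable kubeconfig at the default home path and is not the code in pod_ready.go.

	    package main

	    // Sketch (not minikube's pod_ready.go): list kube-system pods via client-go
	    // and report whether each pod's Ready condition is True.
	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        for _, pod := range pods.Items {
	            ready := false
	            for _, c := range pod.Status.Conditions {
	                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	                    ready = true
	                }
	            }
	            fmt.Printf("%s Ready=%v\n", pod.Name, ready)
	        }
	    }
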
	I0307 23:00:13.397469   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods
	I0307 23:00:13.397469   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:13.397469   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:13.397469   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:13.398969   13728 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 23:00:13.401482   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:13.401482   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:13.401482   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:13 GMT
	I0307 23:00:13.401482   13728 round_trippers.go:580]     Audit-Id: ab456018-b8dd-4e1a-9c7e-e48183ae7cda
	I0307 23:00:13.401482   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:13.401482   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:13.401482   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:13.402453   13728 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"552"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"519","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49610 chars]
	I0307 23:00:13.404045   13728 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-qckb6" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:13.404749   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:13.404777   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:13.404810   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:13.404810   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:13.408569   13728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:00:13.417422   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:13.417422   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:13.417422   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:13.417540   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:13.417540   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:13 GMT
	I0307 23:00:13.417540   13728 round_trippers.go:580]     Audit-Id: 8641210b-6a22-4d59-816a-53a488c30fa1
	I0307 23:00:13.417540   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:13.417748   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"519","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6338 chars]
	I0307 23:00:13.418153   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:13.418153   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:13.418153   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:13.418153   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:13.422311   13728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:00:13.422473   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:13.422519   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:13.422519   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:13.422519   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:13.422555   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:13 GMT
	I0307 23:00:13.422555   13728 round_trippers.go:580]     Audit-Id: d19d1066-e442-4669-a0bf-0dd85003ec2e
	I0307 23:00:13.422555   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:13.422731   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:13.919049   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:13.919049   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:13.919049   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:13.919049   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:13.919738   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:13.931933   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:13.931933   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:13.931933   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:13.931933   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:13.931933   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:13.931933   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:13 GMT
	I0307 23:00:13.931933   13728 round_trippers.go:580]     Audit-Id: 0889d73f-28e1-401a-8cb9-4c47222dfc30
	I0307 23:00:13.932178   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:13.932931   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:13.932931   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:13.932931   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:13.932931   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:13.935490   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:13.935490   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:13.935490   13728 round_trippers.go:580]     Audit-Id: be1a98ce-6214-4329-86a1-047441d90716
	I0307 23:00:13.935490   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:13.935490   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:13.935490   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:13.935490   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:13.935490   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:13 GMT
	I0307 23:00:13.936108   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:14.417576   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:14.417929   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:14.417929   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:14.417929   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:14.421480   13728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:00:14.421480   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:14.421480   13728 round_trippers.go:580]     Audit-Id: 8810a2f0-6c51-4378-923d-9c8f63e2ab41
	I0307 23:00:14.421480   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:14.421480   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:14.421480   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:14.421480   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:14.421480   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:14 GMT
	I0307 23:00:14.421811   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:14.422276   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:14.422276   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:14.422276   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:14.422276   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:14.426395   13728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:00:14.426395   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:14.426931   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:14.426931   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:14.426931   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:14.426931   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:14 GMT
	I0307 23:00:14.426931   13728 round_trippers.go:580]     Audit-Id: 97f8faf3-6264-4a24-bf51-5b2a38b3f982
	I0307 23:00:14.426931   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:14.427072   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:14.919416   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:14.919416   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:14.919416   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:14.919416   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:14.919959   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:14.923164   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:14.923164   13728 round_trippers.go:580]     Audit-Id: ad8a789b-8ca7-4b5e-8815-ee9cbaff869d
	I0307 23:00:14.923164   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:14.923164   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:14.923164   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:14.923164   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:14.923164   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:14 GMT
	I0307 23:00:14.923364   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:14.924116   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:14.924116   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:14.924116   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:14.924116   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:14.927937   13728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:00:14.927937   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:14.927983   13728 round_trippers.go:580]     Audit-Id: e272e11c-70e7-4bf0-9ba8-a09ebb4b7b1f
	I0307 23:00:14.927983   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:14.927983   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:14.927983   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:14.927983   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:14.927983   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:14 GMT
	I0307 23:00:14.928868   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:15.410219   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:15.410470   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:15.410470   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:15.410470   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:15.410774   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:15.410774   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:15.414686   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:15.414686   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:15 GMT
	I0307 23:00:15.414686   13728 round_trippers.go:580]     Audit-Id: e0de9f2d-f0b0-4dcc-9e11-13ebf46e8ace
	I0307 23:00:15.414686   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:15.414686   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:15.414686   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:15.415070   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:15.415365   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:15.415365   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:15.415365   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:15.415365   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:15.420790   13728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:00:15.420890   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:15.420954   13728 round_trippers.go:580]     Audit-Id: 51cc7ac0-8429-4ddc-b0dc-bf09ea290a4b
	I0307 23:00:15.420954   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:15.420995   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:15.421032   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:15.421032   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:15.421082   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:15 GMT
	I0307 23:00:15.421268   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:15.421999   13728 pod_ready.go:102] pod "coredns-5dd5756b68-qckb6" in "kube-system" namespace has status "Ready":"False"
	I0307 23:00:15.912790   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:15.913024   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:15.913024   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:15.913024   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:15.913349   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:15.913349   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:15.913349   13728 round_trippers.go:580]     Audit-Id: 635b074e-701e-4d19-97d8-16945ce00b0d
	I0307 23:00:15.913349   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:15.913349   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:15.913349   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:15.913349   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:15.913349   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:15 GMT
	I0307 23:00:15.916908   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:15.917093   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:15.917093   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:15.917093   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:15.917093   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:15.920849   13728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:00:15.920938   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:15.920938   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:15.920938   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:15 GMT
	I0307 23:00:15.920938   13728 round_trippers.go:580]     Audit-Id: 061ed20f-ac56-4372-b2c4-6fe3b2352148
	I0307 23:00:15.920938   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:15.920938   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:15.920938   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:15.921145   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:16.415700   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:16.415700   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:16.415700   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:16.415700   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:16.416231   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:16.416231   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:16.416231   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:16.419319   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:16.419319   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:16.419319   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:16.419319   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:16 GMT
	I0307 23:00:16.419319   13728 round_trippers.go:580]     Audit-Id: f51a0f88-0079-4906-b478-a82ba82363f3
	I0307 23:00:16.419585   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:16.420179   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:16.420179   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:16.420273   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:16.420273   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:16.420914   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:16.420914   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:16.420914   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:16.420914   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:16 GMT
	I0307 23:00:16.420914   13728 round_trippers.go:580]     Audit-Id: 3a39325e-1a6b-4ed9-9baa-a5c975c3f81c
	I0307 23:00:16.420914   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:16.420914   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:16.423498   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:16.423765   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:16.913616   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:16.913729   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:16.913729   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:16.913729   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:16.914061   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:16.917815   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:16.917815   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:16.917815   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:16.917815   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:16 GMT
	I0307 23:00:16.917815   13728 round_trippers.go:580]     Audit-Id: 4b21495e-63b2-4585-b5a8-3243b8b812ed
	I0307 23:00:16.917815   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:16.917815   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:16.918055   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:16.918211   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:16.918738   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:16.918738   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:16.918738   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:16.919424   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:16.921724   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:16.921724   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:16.921802   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:16.921848   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:16.921848   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:16 GMT
	I0307 23:00:16.921848   13728 round_trippers.go:580]     Audit-Id: ba47d425-da24-40d8-8944-10eb5cb2e0bc
	I0307 23:00:16.921848   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:16.921848   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:17.417541   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:17.417541   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:17.417541   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:17.417541   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:17.422154   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:17.422154   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:17.422154   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:17.422154   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:17.422154   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:17.422154   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:17.422154   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:17 GMT
	I0307 23:00:17.422154   13728 round_trippers.go:580]     Audit-Id: 12d67bde-2599-44a9-b11b-cb8e2bdb69f1
	I0307 23:00:17.422360   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:17.423174   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:17.423174   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:17.423260   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:17.423260   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:17.423890   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:17.426057   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:17.426122   13728 round_trippers.go:580]     Audit-Id: b1947b0d-a6d4-455b-8163-4b8c28963c9e
	I0307 23:00:17.426122   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:17.426122   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:17.426122   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:17.426122   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:17.426122   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:17 GMT
	I0307 23:00:17.426122   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:17.426833   13728 pod_ready.go:102] pod "coredns-5dd5756b68-qckb6" in "kube-system" namespace has status "Ready":"False"
	I0307 23:00:17.917560   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:17.917560   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:17.917560   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:17.917560   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:17.921256   13728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:00:17.921358   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:17.921358   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:17.921358   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:17.921358   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:17.921417   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:17 GMT
	I0307 23:00:17.921417   13728 round_trippers.go:580]     Audit-Id: dba730f6-cb33-4ebd-910f-bea04df2598a
	I0307 23:00:17.921417   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:17.921489   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:17.922237   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:17.922297   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:17.922297   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:17.922297   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:17.922467   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:17.925098   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:17.925098   13728 round_trippers.go:580]     Audit-Id: 346c4013-7cfd-45ed-869f-b70d047f562c
	I0307 23:00:17.925098   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:17.925098   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:17.925098   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:17.925168   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:17.925168   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:17 GMT
	I0307 23:00:17.925492   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:18.415220   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:18.415289   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:18.415289   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:18.415289   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:18.415963   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:18.419092   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:18.419092   13728 round_trippers.go:580]     Audit-Id: af749eb4-8fa8-44f0-800a-56d678e3487a
	I0307 23:00:18.419092   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:18.419092   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:18.419092   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:18.419092   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:18.419092   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:18 GMT
	I0307 23:00:18.419092   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:18.419781   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:18.419781   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:18.419781   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:18.419781   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:18.420381   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:18.420381   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:18.420381   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:18 GMT
	I0307 23:00:18.420381   13728 round_trippers.go:580]     Audit-Id: db2f6d35-890e-4c10-a2a0-11d2e9b33a85
	I0307 23:00:18.423594   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:18.423594   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:18.423594   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:18.423594   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:18.424397   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:18.925201   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:18.925201   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:18.925201   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:18.925377   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:18.925630   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:18.929734   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:18.929734   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:18 GMT
	I0307 23:00:18.929734   13728 round_trippers.go:580]     Audit-Id: 4db1bd67-a76d-45e6-9fd9-f944a069aefb
	I0307 23:00:18.929734   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:18.929734   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:18.929734   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:18.929734   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:18.929887   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:18.930615   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:18.930615   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:18.930615   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:18.930736   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:18.931051   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:18.931051   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:18.931051   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:18.931051   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:18.931051   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:18.931051   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:18.931051   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:18 GMT
	I0307 23:00:18.931051   13728 round_trippers.go:580]     Audit-Id: 795764f2-265f-4f0b-bd57-c18d3388fbbe
	I0307 23:00:18.933921   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:19.410609   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:19.410689   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:19.410689   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:19.410689   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:19.411420   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:19.416234   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:19.416234   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:19.416234   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:19.416234   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:19 GMT
	I0307 23:00:19.416234   13728 round_trippers.go:580]     Audit-Id: e6a4e7d5-3661-437b-9dda-1ecdfecc20fc
	I0307 23:00:19.416234   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:19.416234   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:19.416569   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:19.416942   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:19.416942   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:19.416942   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:19.416942   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:19.417648   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:19.417648   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:19.420313   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:19.420313   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:19.420313   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:19 GMT
	I0307 23:00:19.420313   13728 round_trippers.go:580]     Audit-Id: ad49da38-bbd6-40d8-891d-97251aeb511f
	I0307 23:00:19.420313   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:19.420313   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:19.420589   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:19.907750   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:19.907982   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:19.907982   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:19.907982   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:19.908399   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:19.908399   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:19.908399   13728 round_trippers.go:580]     Audit-Id: 680aad01-6542-4ab3-bc98-04dc06db8b1d
	I0307 23:00:19.913062   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:19.913062   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:19.913062   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:19.913062   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:19.913062   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:19 GMT
	I0307 23:00:19.913261   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:19.913967   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:19.914004   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:19.914004   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:19.914047   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:19.914675   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:19.914675   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:19.914675   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:19 GMT
	I0307 23:00:19.914675   13728 round_trippers.go:580]     Audit-Id: fff5cced-79a8-4a0e-90df-ed81dfba1c02
	I0307 23:00:19.917511   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:19.917511   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:19.917511   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:19.917543   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:19.917848   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:19.918083   13728 pod_ready.go:102] pod "coredns-5dd5756b68-qckb6" in "kube-system" namespace has status "Ready":"False"
	I0307 23:00:20.407552   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:20.407808   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:20.407808   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:20.407808   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:20.408348   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:20.411557   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:20.411557   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:20 GMT
	I0307 23:00:20.411557   13728 round_trippers.go:580]     Audit-Id: 401f0182-bb80-4722-8ba3-ffec1c7fd066
	I0307 23:00:20.411557   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:20.411557   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:20.411557   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:20.411557   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:20.411753   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:20.412502   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:20.412502   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:20.412502   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:20.412502   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:20.413312   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:20.413312   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:20.413312   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:20.413312   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:20.413312   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:20 GMT
	I0307 23:00:20.413312   13728 round_trippers.go:580]     Audit-Id: 0eea8f01-4380-4d50-b7ca-fcf227d3d903
	I0307 23:00:20.413312   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:20.413312   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:20.413312   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:20.904902   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:20.904902   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:20.904967   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:20.904967   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:20.905148   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:20.909065   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:20.909065   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:20.909065   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:20.909065   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:20.909065   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:20 GMT
	I0307 23:00:20.909065   13728 round_trippers.go:580]     Audit-Id: 99c33bb0-a548-4e86-a2e4-3f71a2ece634
	I0307 23:00:20.909190   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:20.909384   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:20.910299   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:20.910366   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:20.910366   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:20.910366   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:20.910520   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:20.910520   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:20.910520   13728 round_trippers.go:580]     Audit-Id: 9add31e8-39b1-4f4b-8009-5563ed191f6c
	I0307 23:00:20.910520   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:20.916207   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:20.916207   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:20.916207   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:20.916207   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:20 GMT
	I0307 23:00:20.916588   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:21.408021   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:21.408288   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:21.408288   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:21.408288   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:21.408898   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:21.411720   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:21.411720   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:21.411720   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:21.411720   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:21.411804   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:21.411804   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:21 GMT
	I0307 23:00:21.411804   13728 round_trippers.go:580]     Audit-Id: 32b687ad-3be3-4424-b075-f9fce1685654
	I0307 23:00:21.412047   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:21.413118   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:21.413195   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:21.413195   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:21.413195   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:21.413396   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:21.413396   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:21.413396   13728 round_trippers.go:580]     Audit-Id: 340db798-3125-4aea-85a5-41b1654a8c5e
	I0307 23:00:21.413396   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:21.413396   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:21.413396   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:21.413396   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:21.413396   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:21 GMT
	I0307 23:00:21.416581   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:21.906271   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:21.906494   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:21.906494   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:21.906494   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:21.906757   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:21.906757   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:21.906757   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:21.906757   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:21 GMT
	I0307 23:00:21.916564   13728 round_trippers.go:580]     Audit-Id: 95611ed8-180c-463d-8e41-68605f65716d
	I0307 23:00:21.916564   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:21.916564   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:21.916564   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:21.916722   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"553","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6383 chars]
	I0307 23:00:21.917566   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:21.917566   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:21.917639   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:21.917639   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:21.918362   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:21.918362   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:21.918362   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:21 GMT
	I0307 23:00:21.918362   13728 round_trippers.go:580]     Audit-Id: 8cae3a56-a25d-4c31-94e4-8bd43be3e7cd
	I0307 23:00:21.918362   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:21.918362   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:21.918362   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:21.918362   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:21.920979   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:21.921390   13728 pod_ready.go:102] pod "coredns-5dd5756b68-qckb6" in "kube-system" namespace has status "Ready":"False"
	I0307 23:00:22.406569   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:22.406659   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:22.406659   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:22.406659   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:22.407051   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:22.407051   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:22.411148   13728 round_trippers.go:580]     Audit-Id: 69a335a5-29a4-48bb-9be5-d9aa6933070d
	I0307 23:00:22.411148   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:22.411148   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:22.411148   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:22.411148   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:22.411148   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:22 GMT
	I0307 23:00:22.411355   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"559","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6154 chars]
	I0307 23:00:22.412084   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:22.412084   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:22.412084   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:22.412084   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:22.412389   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:22.412389   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:22.412389   13728 round_trippers.go:580]     Audit-Id: 879d3eae-683e-49b6-baf7-2f6bb5e7e385
	I0307 23:00:22.412389   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:22.412389   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:22.415826   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:22.415826   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:22.415826   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:22 GMT
	I0307 23:00:22.416124   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:22.416573   13728 pod_ready.go:92] pod "coredns-5dd5756b68-qckb6" in "kube-system" namespace has status "Ready":"True"
	I0307 23:00:22.416573   13728 pod_ready.go:81] duration metric: took 9.0124431s for pod "coredns-5dd5756b68-qckb6" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:22.416573   13728 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:22.416780   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/etcd-functional-934300
	I0307 23:00:22.416780   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:22.416780   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:22.416780   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:22.417021   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:22.417021   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:22.417021   13728 round_trippers.go:580]     Audit-Id: 8332a016-151e-49dc-9241-2452a5f16737
	I0307 23:00:22.417021   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:22.417021   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:22.417021   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:22.417021   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:22.419807   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:22 GMT
	I0307 23:00:22.420060   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-934300","namespace":"kube-system","uid":"dcc6bd79-f9bb-4acd-a050-37d15b5e949c","resourceVersion":"522","creationTimestamp":"2024-03-07T22:57:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.58.27:2379","kubernetes.io/config.hash":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.mirror":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.seen":"2024-03-07T22:57:46.083699801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6093 chars]
	I0307 23:00:22.420179   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:22.420179   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:22.420179   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:22.420179   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:22.422207   13728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 23:00:22.422207   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:22.422207   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:22.422207   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:22.422207   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:22 GMT
	I0307 23:00:22.422207   13728 round_trippers.go:580]     Audit-Id: 49b19d80-aedc-4eff-a63b-144afc75e92c
	I0307 23:00:22.422207   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:22.422207   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:22.424553   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:22.918568   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/etcd-functional-934300
	I0307 23:00:22.918889   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:22.918889   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:22.918889   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:22.919421   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:22.923420   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:22.923420   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:22.923420   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:22 GMT
	I0307 23:00:22.923420   13728 round_trippers.go:580]     Audit-Id: e8ee7a43-f59b-43cf-b53c-39a0de10ba71
	I0307 23:00:22.923420   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:22.923420   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:22.923420   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:22.923656   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-934300","namespace":"kube-system","uid":"dcc6bd79-f9bb-4acd-a050-37d15b5e949c","resourceVersion":"522","creationTimestamp":"2024-03-07T22:57:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.58.27:2379","kubernetes.io/config.hash":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.mirror":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.seen":"2024-03-07T22:57:46.083699801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6093 chars]
	I0307 23:00:22.924373   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:22.924433   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:22.924433   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:22.924433   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:22.924692   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:22.927322   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:22.927468   13728 round_trippers.go:580]     Audit-Id: 2ddef810-f790-421f-919b-067fd3da286b
	I0307 23:00:22.927501   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:22.927501   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:22.927541   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:22.927541   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:22.927541   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:22 GMT
	I0307 23:00:22.927541   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:23.421672   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/etcd-functional-934300
	I0307 23:00:23.421749   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:23.421749   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:23.421801   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:23.422013   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:23.425654   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:23.425654   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:23.425654   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:23.425654   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:23.425756   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:23.425756   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:23 GMT
	I0307 23:00:23.425756   13728 round_trippers.go:580]     Audit-Id: 4b299467-0456-4b9c-b650-62749bdd40d5
	I0307 23:00:23.425838   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-934300","namespace":"kube-system","uid":"dcc6bd79-f9bb-4acd-a050-37d15b5e949c","resourceVersion":"522","creationTimestamp":"2024-03-07T22:57:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.58.27:2379","kubernetes.io/config.hash":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.mirror":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.seen":"2024-03-07T22:57:46.083699801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6093 chars]
	I0307 23:00:23.426524   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:23.426596   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:23.426596   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:23.426596   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:23.426739   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:23.426739   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:23.429493   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:23.429493   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:23.429493   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:23 GMT
	I0307 23:00:23.429493   13728 round_trippers.go:580]     Audit-Id: 472e6c89-c8f4-4f47-88d8-2ad9c59ab702
	I0307 23:00:23.429493   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:23.429493   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:23.429979   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:23.922403   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/etcd-functional-934300
	I0307 23:00:23.922403   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:23.922403   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:23.922403   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:23.922942   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:23.922942   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:23.925587   13728 round_trippers.go:580]     Audit-Id: 3a94473d-63da-4473-acb7-f04e1f3c33c2
	I0307 23:00:23.925587   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:23.925587   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:23.925587   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:23.925587   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:23.925587   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:23 GMT
	I0307 23:00:23.925859   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-934300","namespace":"kube-system","uid":"dcc6bd79-f9bb-4acd-a050-37d15b5e949c","resourceVersion":"522","creationTimestamp":"2024-03-07T22:57:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.58.27:2379","kubernetes.io/config.hash":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.mirror":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.seen":"2024-03-07T22:57:46.083699801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6093 chars]
	I0307 23:00:23.926387   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:23.926387   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:23.926387   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:23.926387   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:23.929584   13728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:00:23.929584   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:23.929584   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:23.929584   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:23.929584   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:23.929584   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:23 GMT
	I0307 23:00:23.929584   13728 round_trippers.go:580]     Audit-Id: 441ab3bd-b3a6-41cc-869c-d222bf3ced22
	I0307 23:00:23.929584   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:23.929584   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:24.425897   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/etcd-functional-934300
	I0307 23:00:24.425897   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:24.426301   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:24.426301   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:24.426839   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:24.429577   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:24.429577   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:24.429676   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:24.429676   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:24 GMT
	I0307 23:00:24.429676   13728 round_trippers.go:580]     Audit-Id: 2760ffee-5180-4085-abc0-79a9f8bcad90
	I0307 23:00:24.429676   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:24.429676   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:24.429933   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-934300","namespace":"kube-system","uid":"dcc6bd79-f9bb-4acd-a050-37d15b5e949c","resourceVersion":"522","creationTimestamp":"2024-03-07T22:57:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.58.27:2379","kubernetes.io/config.hash":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.mirror":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.seen":"2024-03-07T22:57:46.083699801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6093 chars]
	I0307 23:00:24.430274   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:24.430274   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:24.430274   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:24.430274   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:24.433978   13728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:00:24.433978   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:24.433978   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:24.433978   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:24.433978   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:24.433978   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:24 GMT
	I0307 23:00:24.433978   13728 round_trippers.go:580]     Audit-Id: 645d85a1-0642-42ec-ae23-96158913d460
	I0307 23:00:24.433978   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:24.434689   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:24.434942   13728 pod_ready.go:102] pod "etcd-functional-934300" in "kube-system" namespace has status "Ready":"False"
	I0307 23:00:24.918796   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/etcd-functional-934300
	I0307 23:00:24.919054   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:24.919116   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:24.919116   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:24.919366   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:24.922511   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:24.922511   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:24.922511   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:24 GMT
	I0307 23:00:24.922511   13728 round_trippers.go:580]     Audit-Id: 58883e70-32e2-409b-a351-b99d9b45b5b4
	I0307 23:00:24.922511   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:24.922511   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:24.922511   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:24.922770   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-934300","namespace":"kube-system","uid":"dcc6bd79-f9bb-4acd-a050-37d15b5e949c","resourceVersion":"522","creationTimestamp":"2024-03-07T22:57:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.58.27:2379","kubernetes.io/config.hash":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.mirror":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.seen":"2024-03-07T22:57:46.083699801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6093 chars]
	I0307 23:00:24.922996   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:24.922996   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:24.922996   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:24.922996   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:24.923771   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:24.926687   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:24.926687   13728 round_trippers.go:580]     Audit-Id: 1180abaf-f8b6-4672-9467-02d1819790a6
	I0307 23:00:24.926687   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:24.926687   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:24.926687   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:24.926687   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:24.926687   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:24 GMT
	I0307 23:00:24.926994   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:25.417702   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/etcd-functional-934300
	I0307 23:00:25.417777   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:25.417777   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:25.417777   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:25.418083   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:25.421843   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:25.421843   13728 round_trippers.go:580]     Audit-Id: 3db3a0b5-c60a-44b4-ae13-ace3685d4124
	I0307 23:00:25.421843   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:25.421843   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:25.421969   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:25.421969   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:25.421969   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:25 GMT
	I0307 23:00:25.422022   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-934300","namespace":"kube-system","uid":"dcc6bd79-f9bb-4acd-a050-37d15b5e949c","resourceVersion":"522","creationTimestamp":"2024-03-07T22:57:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.58.27:2379","kubernetes.io/config.hash":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.mirror":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.seen":"2024-03-07T22:57:46.083699801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6093 chars]
	I0307 23:00:25.422634   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:25.422634   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:25.422634   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:25.422634   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:25.427842   13728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:00:25.427842   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:25.427917   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:25.427917   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:25.427917   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:25.427917   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:25.427917   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:25 GMT
	I0307 23:00:25.427917   13728 round_trippers.go:580]     Audit-Id: eca61fc3-c2df-44b6-b2f8-f71ee74471b8
	I0307 23:00:25.428032   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:25.932443   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/etcd-functional-934300
	I0307 23:00:25.932509   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:25.932509   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:25.932509   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:25.936008   13728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:00:25.936074   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:25.936074   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:25.936074   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:25.936074   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:25.936074   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:25.936074   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:25 GMT
	I0307 23:00:25.936074   13728 round_trippers.go:580]     Audit-Id: 6a4b906c-554d-48e9-99ca-aa94e07c1350
	I0307 23:00:25.936074   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-934300","namespace":"kube-system","uid":"dcc6bd79-f9bb-4acd-a050-37d15b5e949c","resourceVersion":"522","creationTimestamp":"2024-03-07T22:57:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.58.27:2379","kubernetes.io/config.hash":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.mirror":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.seen":"2024-03-07T22:57:46.083699801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6093 chars]
	I0307 23:00:25.936817   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:25.936817   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:25.936817   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:25.936817   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:25.937337   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:25.937337   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:25.940197   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:25.940197   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:25 GMT
	I0307 23:00:25.940197   13728 round_trippers.go:580]     Audit-Id: 64e256eb-aead-4894-a52b-67051092000b
	I0307 23:00:25.940197   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:25.940197   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:25.940197   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:25.940349   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:26.419363   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/etcd-functional-934300
	I0307 23:00:26.419399   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:26.419399   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:26.419447   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:26.423789   13728 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:00:26.423868   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:26.423868   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:26.423868   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:26.423868   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:26 GMT
	I0307 23:00:26.423868   13728 round_trippers.go:580]     Audit-Id: 0fb4d42c-625c-45e1-853d-99b843bd9653
	I0307 23:00:26.423868   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:26.423868   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:26.424119   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-934300","namespace":"kube-system","uid":"dcc6bd79-f9bb-4acd-a050-37d15b5e949c","resourceVersion":"571","creationTimestamp":"2024-03-07T22:57:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.58.27:2379","kubernetes.io/config.hash":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.mirror":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.seen":"2024-03-07T22:57:46.083699801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5869 chars]
	I0307 23:00:26.424440   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:26.424440   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:26.424440   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:26.424440   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:26.427641   13728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:00:26.427641   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:26.427641   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:26.427641   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:26.427641   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:26.427641   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:26 GMT
	I0307 23:00:26.427641   13728 round_trippers.go:580]     Audit-Id: 6ad83801-4429-4719-b2df-82afde7e4f18
	I0307 23:00:26.427641   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:26.427641   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:26.429260   13728 pod_ready.go:92] pod "etcd-functional-934300" in "kube-system" namespace has status "Ready":"True"
	I0307 23:00:26.429290   13728 pod_ready.go:81] duration metric: took 4.0126796s for pod "etcd-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:26.429290   13728 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:26.429290   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-934300
	I0307 23:00:26.429290   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:26.429290   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:26.429290   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:26.431665   13728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 23:00:26.432496   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:26.432496   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:26.432545   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:26.432545   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:26.432545   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:26 GMT
	I0307 23:00:26.432545   13728 round_trippers.go:580]     Audit-Id: 5043c334-4c35-42c7-8fb6-33b29806a043
	I0307 23:00:26.432545   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:26.432861   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-934300","namespace":"kube-system","uid":"89292fed-5152-47c0-b3fa-44af37af8bc1","resourceVersion":"557","creationTimestamp":"2024-03-07T22:57:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.58.27:8441","kubernetes.io/config.hash":"dcbd74f28184f2ba6b30434803b70bb0","kubernetes.io/config.mirror":"dcbd74f28184f2ba6b30434803b70bb0","kubernetes.io/config.seen":"2024-03-07T22:57:38.166828307Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7850 chars]
	I0307 23:00:26.433451   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:26.433699   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:26.433699   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:26.433699   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:26.433912   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:26.436275   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:26.436275   13728 round_trippers.go:580]     Audit-Id: 2ae8c4fb-f11c-4126-9705-01a95937a2e0
	I0307 23:00:26.436426   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:26.436426   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:26.436426   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:26.436426   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:26.436426   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:26 GMT
	I0307 23:00:26.436426   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:26.437023   13728 pod_ready.go:92] pod "kube-apiserver-functional-934300" in "kube-system" namespace has status "Ready":"True"
	I0307 23:00:26.437078   13728 pod_ready.go:81] duration metric: took 7.7878ms for pod "kube-apiserver-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:26.437078   13728 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:26.437218   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-934300
	I0307 23:00:26.437347   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:26.437389   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:26.437389   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:26.441002   13728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:00:26.441030   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:26.441066   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:26 GMT
	I0307 23:00:26.441066   13728 round_trippers.go:580]     Audit-Id: d1e34e7b-467a-4982-9ae2-d5fdc6d7dbdb
	I0307 23:00:26.441066   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:26.441119   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:26.441119   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:26.441153   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:26.441275   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-934300","namespace":"kube-system","uid":"04393d46-35b0-4807-acd9-d46af0a8de3c","resourceVersion":"562","creationTimestamp":"2024-03-07T22:57:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e386d34c173852da81db90cbf5c6931e","kubernetes.io/config.mirror":"e386d34c173852da81db90cbf5c6931e","kubernetes.io/config.seen":"2024-03-07T22:57:38.166829506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 6980 chars]
	I0307 23:00:26.442064   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:26.442064   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:26.442064   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:26.442064   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:26.445692   13728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:00:26.445692   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:26.445738   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:26.445738   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:26 GMT
	I0307 23:00:26.445738   13728 round_trippers.go:580]     Audit-Id: 8515bf4c-d2d2-4928-a7b5-3a065cf23e1b
	I0307 23:00:26.445792   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:26.445792   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:26.445792   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:26.445792   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:26.445792   13728 pod_ready.go:92] pod "kube-controller-manager-functional-934300" in "kube-system" namespace has status "Ready":"True"
	I0307 23:00:26.445792   13728 pod_ready.go:81] duration metric: took 8.7139ms for pod "kube-controller-manager-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:26.445792   13728 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ng97v" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:26.446407   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/kube-proxy-ng97v
	I0307 23:00:26.446463   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:26.446463   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:26.446463   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:26.449840   13728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:00:26.450085   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:26.450085   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:26.450085   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:26 GMT
	I0307 23:00:26.450085   13728 round_trippers.go:580]     Audit-Id: 80440452-d7a9-4c25-9f6a-9284e5b36137
	I0307 23:00:26.450085   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:26.450085   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:26.450085   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:26.450085   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ng97v","generateName":"kube-proxy-","namespace":"kube-system","uid":"e5408fb9-13f3-46d1-9509-d0c312f0c175","resourceVersion":"554","creationTimestamp":"2024-03-07T22:57:58Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d256e9a6-9c99-4df9-b1d5-c8f7c69aaf64","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d256e9a6-9c99-4df9-b1d5-c8f7c69aaf64\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5876 chars]
	I0307 23:00:26.450852   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:26.450906   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:26.450906   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:26.450906   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:26.451534   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:26.453580   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:26.453580   13728 round_trippers.go:580]     Audit-Id: 9678393c-ff3c-4864-bdd4-3ffd8b0a5cac
	I0307 23:00:26.453580   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:26.453580   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:26.453580   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:26.453580   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:26.453580   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:26 GMT
	I0307 23:00:26.453936   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:26.453968   13728 pod_ready.go:92] pod "kube-proxy-ng97v" in "kube-system" namespace has status "Ready":"True"
	I0307 23:00:26.453968   13728 pod_ready.go:81] duration metric: took 8.1763ms for pod "kube-proxy-ng97v" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:26.453968   13728 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:26.454607   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-934300
	I0307 23:00:26.454607   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:26.454607   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:26.454607   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:26.454949   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:26.454949   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:26.454949   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:26 GMT
	I0307 23:00:26.454949   13728 round_trippers.go:580]     Audit-Id: bba3b679-edc5-4ffd-8334-3e25258bb9be
	I0307 23:00:26.454949   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:26.454949   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:26.454949   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:26.454949   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:26.457299   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-934300","namespace":"kube-system","uid":"a83c5c0c-4e51-4bf5-b002-5c4e59c782d8","resourceVersion":"570","creationTimestamp":"2024-03-07T22:57:45Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"605726bb8e0dce38fb04f185baf57dbe","kubernetes.io/config.mirror":"605726bb8e0dce38fb04f185baf57dbe","kubernetes.io/config.seen":"2024-03-07T22:57:38.166830506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4710 chars]
	I0307 23:00:26.458083   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:26.458142   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:26.458188   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:26.458188   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:26.459048   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:26.459048   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:26.459048   13728 round_trippers.go:580]     Audit-Id: 20baed43-9571-40b5-a8f8-1a58ff453c32
	I0307 23:00:26.459048   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:26.459048   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:26.459048   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:26.459048   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:26.459048   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:26 GMT
	I0307 23:00:26.459048   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:26.461941   13728 pod_ready.go:92] pod "kube-scheduler-functional-934300" in "kube-system" namespace has status "Ready":"True"
	I0307 23:00:26.461972   13728 pod_ready.go:81] duration metric: took 8.0036ms for pod "kube-scheduler-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:26.461972   13728 pod_ready.go:38] duration metric: took 13.0644989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 23:00:26.461972   13728 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 23:00:26.470556   13728 command_runner.go:130] > -16
	I0307 23:00:26.478681   13728 ops.go:34] apiserver oom_adj: -16
	I0307 23:00:26.478681   13728 kubeadm.go:591] duration metric: took 21.9863243s to restartPrimaryControlPlane
	I0307 23:00:26.478681   13728 kubeadm.go:393] duration metric: took 22.0707167s to StartCluster
	I0307 23:00:26.478731   13728 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:00:26.478765   13728 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:00:26.479540   13728 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:00:26.480245   13728 start.go:234] Will wait 6m0s for node &{Name: IP:172.20.58.27 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:00:26.480245   13728 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 23:00:26.484363   13728 out.go:177] * Verifying Kubernetes components...
	I0307 23:00:26.480245   13728 addons.go:69] Setting storage-provisioner=true in profile "functional-934300"
	I0307 23:00:26.480245   13728 addons.go:69] Setting default-storageclass=true in profile "functional-934300"
	I0307 23:00:26.481721   13728 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:00:26.484491   13728 addons.go:234] Setting addon storage-provisioner=true in "functional-934300"
	W0307 23:00:26.484556   13728 addons.go:243] addon storage-provisioner should already be in state true
	I0307 23:00:26.484556   13728 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-934300"
	I0307 23:00:26.484556   13728 host.go:66] Checking if "functional-934300" exists ...
	I0307 23:00:26.485482   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 23:00:26.489312   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 23:00:26.504797   13728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:00:26.750720   13728 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 23:00:26.775646   13728 node_ready.go:35] waiting up to 6m0s for node "functional-934300" to be "Ready" ...
	I0307 23:00:26.775903   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:26.775903   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:26.775903   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:26.775903   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:26.776562   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:26.780111   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:26.780111   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:26.780111   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:26.780111   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:26.780111   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:26.780111   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:26 GMT
	I0307 23:00:26.780111   13728 round_trippers.go:580]     Audit-Id: 78b601f4-0f5e-40d0-9f8f-d46dd844e0d0
	I0307 23:00:26.780684   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:26.780989   13728 node_ready.go:49] node "functional-934300" has status "Ready":"True"
	I0307 23:00:26.780989   13728 node_ready.go:38] duration metric: took 5.2552ms for node "functional-934300" to be "Ready" ...
	I0307 23:00:26.780989   13728 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 23:00:26.832786   13728 request.go:629] Waited for 51.5251ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods
	I0307 23:00:26.832786   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods
	I0307 23:00:26.832994   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:26.832994   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:26.832994   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:26.833836   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:26.833836   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:26.837284   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:26 GMT
	I0307 23:00:26.837284   13728 round_trippers.go:580]     Audit-Id: ed72532d-8b05-44f4-ac02-be367dd8012d
	I0307 23:00:26.837284   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:26.837284   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:26.837284   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:26.837284   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:26.838528   13728 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"571"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"559","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 48054 chars]
	I0307 23:00:26.841015   13728 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qckb6" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:27.024778   13728 request.go:629] Waited for 183.6368ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:27.025001   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qckb6
	I0307 23:00:27.025079   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:27.025079   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:27.025079   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:27.028861   13728 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:00:27.028861   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:27.028861   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:27.028861   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:27.028861   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:27.028861   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:27.028861   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:27 GMT
	I0307 23:00:27.028861   13728 round_trippers.go:580]     Audit-Id: 9245d7c6-2fe9-40fe-a974-571fcf72bb78
	I0307 23:00:27.029416   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"559","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6154 chars]
	I0307 23:00:27.224696   13728 request.go:629] Waited for 195.2784ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:27.224768   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:27.224768   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:27.224768   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:27.224768   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:27.225918   13728 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 23:00:27.225918   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:27.225918   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:27.225918   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:27.229342   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:27.229342   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:27 GMT
	I0307 23:00:27.229342   13728 round_trippers.go:580]     Audit-Id: f85166f1-3c93-4f3a-bbb3-d2a3e25a51d6
	I0307 23:00:27.229412   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:27.229412   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:27.230095   13728 pod_ready.go:92] pod "coredns-5dd5756b68-qckb6" in "kube-system" namespace has status "Ready":"True"
	I0307 23:00:27.230095   13728 pod_ready.go:81] duration metric: took 389.0767ms for pod "coredns-5dd5756b68-qckb6" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:27.230095   13728 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:27.419715   13728 request.go:629] Waited for 189.433ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/etcd-functional-934300
	I0307 23:00:27.419929   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/etcd-functional-934300
	I0307 23:00:27.419929   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:27.420013   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:27.420013   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:27.425037   13728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:00:27.425106   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:27.425106   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:27 GMT
	I0307 23:00:27.425106   13728 round_trippers.go:580]     Audit-Id: c74a4f0f-4d92-4d76-9ae2-9dbce87a6037
	I0307 23:00:27.425106   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:27.425106   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:27.425106   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:27.425106   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:27.425106   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-934300","namespace":"kube-system","uid":"dcc6bd79-f9bb-4acd-a050-37d15b5e949c","resourceVersion":"571","creationTimestamp":"2024-03-07T22:57:46Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.58.27:2379","kubernetes.io/config.hash":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.mirror":"b1f8ee2ae9b41b1c476f2aaaf2481101","kubernetes.io/config.seen":"2024-03-07T22:57:46.083699801Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5869 chars]
	I0307 23:00:27.623316   13728 request.go:629] Waited for 197.4839ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:27.623674   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:27.623719   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:27.623719   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:27.623791   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:27.624053   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:27.624053   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:27.624053   13728 round_trippers.go:580]     Audit-Id: eb200add-4556-4853-b06d-3888a02b161b
	I0307 23:00:27.624053   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:27.624053   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:27.624053   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:27.624053   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:27.624053   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:27 GMT
	I0307 23:00:27.628142   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:27.628442   13728 pod_ready.go:92] pod "etcd-functional-934300" in "kube-system" namespace has status "Ready":"True"
	I0307 23:00:27.628578   13728 pod_ready.go:81] duration metric: took 398.4796ms for pod "etcd-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:27.628578   13728 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:27.829469   13728 request.go:629] Waited for 200.6675ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-934300
	I0307 23:00:27.829469   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-934300
	I0307 23:00:27.829671   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:27.829671   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:27.829671   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:27.829898   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:27.829898   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:27.829898   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:27.833979   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:27.833979   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:27.834049   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:27 GMT
	I0307 23:00:27.834049   13728 round_trippers.go:580]     Audit-Id: 7c6718b8-d9db-47b3-9811-d1180c574c79
	I0307 23:00:27.834049   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:27.834446   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-934300","namespace":"kube-system","uid":"89292fed-5152-47c0-b3fa-44af37af8bc1","resourceVersion":"557","creationTimestamp":"2024-03-07T22:57:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.58.27:8441","kubernetes.io/config.hash":"dcbd74f28184f2ba6b30434803b70bb0","kubernetes.io/config.mirror":"dcbd74f28184f2ba6b30434803b70bb0","kubernetes.io/config.seen":"2024-03-07T22:57:38.166828307Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7850 chars]
	I0307 23:00:28.023477   13728 request.go:629] Waited for 188.2179ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:28.023534   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:28.023534   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:28.023534   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:28.023534   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:28.024149   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:28.026809   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:28.026809   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:28 GMT
	I0307 23:00:28.026809   13728 round_trippers.go:580]     Audit-Id: 13be8ab4-80cc-49d2-8c24-47f71b5c0558
	I0307 23:00:28.026809   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:28.026809   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:28.026963   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:28.027007   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:28.027122   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:28.027205   13728 pod_ready.go:92] pod "kube-apiserver-functional-934300" in "kube-system" namespace has status "Ready":"True"
	I0307 23:00:28.027205   13728 pod_ready.go:81] duration metric: took 398.6235ms for pod "kube-apiserver-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:28.027205   13728 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:28.224980   13728 request.go:629] Waited for 197.5282ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-934300
	I0307 23:00:28.225250   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-934300
	I0307 23:00:28.225250   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:28.225301   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:28.225301   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:28.230618   13728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:00:28.230618   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:28.230618   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:28.230618   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:28.230618   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:28 GMT
	I0307 23:00:28.230618   13728 round_trippers.go:580]     Audit-Id: 26a188d6-d53d-4f09-a98f-6c36a3c3d285
	I0307 23:00:28.230618   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:28.230618   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:28.231254   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-934300","namespace":"kube-system","uid":"04393d46-35b0-4807-acd9-d46af0a8de3c","resourceVersion":"562","creationTimestamp":"2024-03-07T22:57:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e386d34c173852da81db90cbf5c6931e","kubernetes.io/config.mirror":"e386d34c173852da81db90cbf5c6931e","kubernetes.io/config.seen":"2024-03-07T22:57:38.166829506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 6980 chars]
	I0307 23:00:28.429666   13728 request.go:629] Waited for 197.5966ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:28.429795   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:28.429795   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:28.429795   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:28.429795   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:28.430935   13728 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 23:00:28.430935   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:28.430935   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:28.430935   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:28.430935   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:28 GMT
	I0307 23:00:28.430935   13728 round_trippers.go:580]     Audit-Id: 6ae2f55c-8220-4816-ba5c-461e70aa082d
	I0307 23:00:28.430935   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:28.430935   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:28.434068   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:28.434068   13728 pod_ready.go:92] pod "kube-controller-manager-functional-934300" in "kube-system" namespace has status "Ready":"True"
	I0307 23:00:28.434068   13728 pod_ready.go:81] duration metric: took 406.859ms for pod "kube-controller-manager-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:28.434598   13728 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ng97v" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:28.500856   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:00:28.500856   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:00:28.500856   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:00:28.512373   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:00:28.517368   13728 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 23:00:28.513246   13728 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:00:28.518152   13728 kapi.go:59] client config for functional-934300: &rest.Config{Host:"https://172.20.58.27:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-934300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-934300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 23:00:28.521451   13728 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 23:00:28.521451   13728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 23:00:28.521451   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 23:00:28.522139   13728 addons.go:234] Setting addon default-storageclass=true in "functional-934300"
	W0307 23:00:28.522139   13728 addons.go:243] addon default-storageclass should already be in state true
	I0307 23:00:28.522665   13728 host.go:66] Checking if "functional-934300" exists ...
	I0307 23:00:28.523520   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 23:00:28.631009   13728 request.go:629] Waited for 196.1726ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/kube-proxy-ng97v
	I0307 23:00:28.631009   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/kube-proxy-ng97v
	I0307 23:00:28.631286   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:28.631286   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:28.631286   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:28.631558   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:28.635583   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:28.635583   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:28.635583   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:28.635583   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:28.635583   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:28.635583   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:28 GMT
	I0307 23:00:28.635664   13728 round_trippers.go:580]     Audit-Id: 291e38e8-ad78-4eaf-acc4-6860fd957db8
	I0307 23:00:28.635894   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ng97v","generateName":"kube-proxy-","namespace":"kube-system","uid":"e5408fb9-13f3-46d1-9509-d0c312f0c175","resourceVersion":"554","creationTimestamp":"2024-03-07T22:57:58Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d256e9a6-9c99-4df9-b1d5-c8f7c69aaf64","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d256e9a6-9c99-4df9-b1d5-c8f7c69aaf64\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5876 chars]
	I0307 23:00:28.825125   13728 request.go:629] Waited for 188.1552ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:28.825197   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:28.825197   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:28.825197   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:28.825197   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:28.825892   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:28.829712   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:28.829782   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:28.829861   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:28 GMT
	I0307 23:00:28.829861   13728 round_trippers.go:580]     Audit-Id: 6d880a3d-f2aa-4494-87c4-63899ee5cd53
	I0307 23:00:28.829861   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:28.829861   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:28.829861   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:28.829861   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:28.830495   13728 pod_ready.go:92] pod "kube-proxy-ng97v" in "kube-system" namespace has status "Ready":"True"
	I0307 23:00:28.830495   13728 pod_ready.go:81] duration metric: took 395.8933ms for pod "kube-proxy-ng97v" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:28.830495   13728 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:29.021813   13728 request.go:629] Waited for 191.2692ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-934300
	I0307 23:00:29.022075   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-934300
	I0307 23:00:29.022075   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:29.022075   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:29.022075   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:29.029701   13728 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:00:29.029701   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:29.029701   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:29.029701   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:29.029701   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:29.029701   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:29.029701   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:29 GMT
	I0307 23:00:29.029701   13728 round_trippers.go:580]     Audit-Id: 15751a20-1972-4d54-9a2c-042f5b6befae
	I0307 23:00:29.029701   13728 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-934300","namespace":"kube-system","uid":"a83c5c0c-4e51-4bf5-b002-5c4e59c782d8","resourceVersion":"570","creationTimestamp":"2024-03-07T22:57:45Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"605726bb8e0dce38fb04f185baf57dbe","kubernetes.io/config.mirror":"605726bb8e0dce38fb04f185baf57dbe","kubernetes.io/config.seen":"2024-03-07T22:57:38.166830506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4710 chars]
	I0307 23:00:29.228593   13728 request.go:629] Waited for 198.0327ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:29.228922   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes/functional-934300
	I0307 23:00:29.228922   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:29.228922   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:29.228922   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:29.229563   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:29.229563   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:29.229563   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:29.229563   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:29.229563   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:29.229563   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:29.233336   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:29 GMT
	I0307 23:00:29.233336   13728 round_trippers.go:580]     Audit-Id: c2867d08-8338-4fb4-9a65-c36a1f699964
	I0307 23:00:29.233800   13728 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-03-07T22:57:42Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I0307 23:00:29.234246   13728 pod_ready.go:92] pod "kube-scheduler-functional-934300" in "kube-system" namespace has status "Ready":"True"
	I0307 23:00:29.234305   13728 pod_ready.go:81] duration metric: took 403.8066ms for pod "kube-scheduler-functional-934300" in "kube-system" namespace to be "Ready" ...
	I0307 23:00:29.234305   13728 pod_ready.go:38] duration metric: took 2.4532931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 23:00:29.234305   13728 api_server.go:52] waiting for apiserver process to appear ...
	I0307 23:00:29.244154   13728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 23:00:29.271488   13728 command_runner.go:130] > 7170
	I0307 23:00:29.271766   13728 api_server.go:72] duration metric: took 2.7914949s to wait for apiserver process to appear ...
	I0307 23:00:29.271766   13728 api_server.go:88] waiting for apiserver healthz status ...
	I0307 23:00:29.271766   13728 api_server.go:253] Checking apiserver healthz at https://172.20.58.27:8441/healthz ...
	I0307 23:00:29.282789   13728 api_server.go:279] https://172.20.58.27:8441/healthz returned 200:
	ok
	I0307 23:00:29.282937   13728 round_trippers.go:463] GET https://172.20.58.27:8441/version
	I0307 23:00:29.282937   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:29.282937   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:29.282937   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:29.285606   13728 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 23:00:29.285606   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:29.285606   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:29 GMT
	I0307 23:00:29.285606   13728 round_trippers.go:580]     Audit-Id: 6dbc2942-3141-44a1-bb95-2c48b5d799a5
	I0307 23:00:29.285606   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:29.285606   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:29.285606   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:29.285606   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:29.285606   13728 round_trippers.go:580]     Content-Length: 264
	I0307 23:00:29.285606   13728 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0307 23:00:29.285606   13728 api_server.go:141] control plane version: v1.28.4
	I0307 23:00:29.285606   13728 api_server.go:131] duration metric: took 13.8399ms to wait for apiserver health ...
	I0307 23:00:29.286141   13728 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 23:00:29.422748   13728 request.go:629] Waited for 136.3003ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods
	I0307 23:00:29.422878   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods
	I0307 23:00:29.422878   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:29.422878   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:29.422935   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:29.423134   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:29.423134   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:29.423134   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:29.423134   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:29.423134   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:29.423134   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:29.427869   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:29 GMT
	I0307 23:00:29.427869   13728 round_trippers.go:580]     Audit-Id: 82d80f1c-2aba-4be2-9dcb-9393b9472ac3
	I0307 23:00:29.429460   13728 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"571"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"559","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 48054 chars]
	I0307 23:00:29.431774   13728 system_pods.go:59] 7 kube-system pods found
	I0307 23:00:29.431774   13728 system_pods.go:61] "coredns-5dd5756b68-qckb6" [1d70d200-b84d-406f-a812-aeada0591d68] Running
	I0307 23:00:29.431774   13728 system_pods.go:61] "etcd-functional-934300" [dcc6bd79-f9bb-4acd-a050-37d15b5e949c] Running
	I0307 23:00:29.431909   13728 system_pods.go:61] "kube-apiserver-functional-934300" [89292fed-5152-47c0-b3fa-44af37af8bc1] Running
	I0307 23:00:29.431909   13728 system_pods.go:61] "kube-controller-manager-functional-934300" [04393d46-35b0-4807-acd9-d46af0a8de3c] Running
	I0307 23:00:29.431909   13728 system_pods.go:61] "kube-proxy-ng97v" [e5408fb9-13f3-46d1-9509-d0c312f0c175] Running
	I0307 23:00:29.431909   13728 system_pods.go:61] "kube-scheduler-functional-934300" [a83c5c0c-4e51-4bf5-b002-5c4e59c782d8] Running
	I0307 23:00:29.431909   13728 system_pods.go:61] "storage-provisioner" [c743467f-e104-4404-b662-be573f6ec4a0] Running
	I0307 23:00:29.431909   13728 system_pods.go:74] duration metric: took 145.7659ms to wait for pod list to return data ...
	I0307 23:00:29.432080   13728 default_sa.go:34] waiting for default service account to be created ...
	I0307 23:00:29.620850   13728 request.go:629] Waited for 188.4268ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/namespaces/default/serviceaccounts
	I0307 23:00:29.620979   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/default/serviceaccounts
	I0307 23:00:29.620979   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:29.621106   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:29.621106   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:29.621385   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:29.624977   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:29.624977   13728 round_trippers.go:580]     Audit-Id: 5518c06f-9a11-4996-8e0a-acea495ec357
	I0307 23:00:29.624977   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:29.624977   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:29.625044   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:29.625044   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:29.625044   13728 round_trippers.go:580]     Content-Length: 261
	I0307 23:00:29.625081   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:29 GMT
	I0307 23:00:29.625104   13728 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"571"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5b73c644-ccc7-4b70-b8b9-65cac234ce31","resourceVersion":"310","creationTimestamp":"2024-03-07T22:57:58Z"}}]}
	I0307 23:00:29.625311   13728 default_sa.go:45] found service account: "default"
	I0307 23:00:29.625418   13728 default_sa.go:55] duration metric: took 193.3362ms for default service account to be created ...
	I0307 23:00:29.625418   13728 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 23:00:29.828974   13728 request.go:629] Waited for 203.4442ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods
	I0307 23:00:29.828974   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/namespaces/kube-system/pods
	I0307 23:00:29.828974   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:29.828974   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:29.828974   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:29.835917   13728 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:00:29.835917   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:29.835917   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:29.835917   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:29.835917   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:29.835917   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:29.835917   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:29 GMT
	I0307 23:00:29.835917   13728 round_trippers.go:580]     Audit-Id: 90234393-8c2d-47d4-a9f1-ec12301330e4
	I0307 23:00:29.836975   13728 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"571"},"items":[{"metadata":{"name":"coredns-5dd5756b68-qckb6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"1d70d200-b84d-406f-a812-aeada0591d68","resourceVersion":"559","creationTimestamp":"2024-03-07T22:57:59Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2c0987df-351c-4506-a2b9-9c879d4c0fca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-07T22:57:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c0987df-351c-4506-a2b9-9c879d4c0fca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 48054 chars]
	I0307 23:00:29.839457   13728 system_pods.go:86] 7 kube-system pods found
	I0307 23:00:29.839457   13728 system_pods.go:89] "coredns-5dd5756b68-qckb6" [1d70d200-b84d-406f-a812-aeada0591d68] Running
	I0307 23:00:29.839457   13728 system_pods.go:89] "etcd-functional-934300" [dcc6bd79-f9bb-4acd-a050-37d15b5e949c] Running
	I0307 23:00:29.839457   13728 system_pods.go:89] "kube-apiserver-functional-934300" [89292fed-5152-47c0-b3fa-44af37af8bc1] Running
	I0307 23:00:29.839457   13728 system_pods.go:89] "kube-controller-manager-functional-934300" [04393d46-35b0-4807-acd9-d46af0a8de3c] Running
	I0307 23:00:29.839457   13728 system_pods.go:89] "kube-proxy-ng97v" [e5408fb9-13f3-46d1-9509-d0c312f0c175] Running
	I0307 23:00:29.839457   13728 system_pods.go:89] "kube-scheduler-functional-934300" [a83c5c0c-4e51-4bf5-b002-5c4e59c782d8] Running
	I0307 23:00:29.839457   13728 system_pods.go:89] "storage-provisioner" [c743467f-e104-4404-b662-be573f6ec4a0] Running
	I0307 23:00:29.839457   13728 system_pods.go:126] duration metric: took 214.0374ms to wait for k8s-apps to be running ...
	I0307 23:00:29.839457   13728 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 23:00:29.856605   13728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 23:00:29.881414   13728 system_svc.go:56] duration metric: took 41.9562ms WaitForService to wait for kubelet
	I0307 23:00:29.881414   13728 kubeadm.go:576] duration metric: took 3.4011368s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 23:00:29.881414   13728 node_conditions.go:102] verifying NodePressure condition ...
	I0307 23:00:30.026423   13728 request.go:629] Waited for 144.737ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.27:8441/api/v1/nodes
	I0307 23:00:30.026613   13728 round_trippers.go:463] GET https://172.20.58.27:8441/api/v1/nodes
	I0307 23:00:30.026613   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:30.026691   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:30.026691   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:30.032556   13728 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:00:30.032556   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:30.032556   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:30 GMT
	I0307 23:00:30.032556   13728 round_trippers.go:580]     Audit-Id: 06a20b2e-838b-4467-90d6-096ab6c9a338
	I0307 23:00:30.032556   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:30.032556   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:30.032556   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:30.032556   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:30.033103   13728 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"574"},"items":[{"metadata":{"name":"functional-934300","uid":"161d9b06-5e82-4136-b260-e7c46cd5b36d","resourceVersion":"492","creationTimestamp":"2024-03-07T22:57:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-934300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"functional-934300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_07T22_57_46_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4838 chars]
	I0307 23:00:30.033984   13728 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 23:00:30.034069   13728 node_conditions.go:123] node cpu capacity is 2
	I0307 23:00:30.034151   13728 node_conditions.go:105] duration metric: took 152.7351ms to run NodePressure ...
	I0307 23:00:30.034182   13728 start.go:240] waiting for startup goroutines ...
	I0307 23:00:30.537847   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:00:30.537847   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:00:30.537847   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:00:30.537847   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:00:30.537847   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 23:00:30.537847   13728 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 23:00:30.537847   13728 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 23:00:30.537847   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
	I0307 23:00:32.511531   13728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:00:32.511531   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:00:32.511531   13728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
	I0307 23:00:32.852139   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 23:00:32.852218   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:00:32.852218   13728 sshutil.go:53] new ssh client: &{IP:172.20.58.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-934300\id_rsa Username:docker}
	I0307 23:00:32.980949   13728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 23:00:33.851720   13728 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0307 23:00:33.851770   13728 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0307 23:00:33.851813   13728 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0307 23:00:33.851813   13728 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0307 23:00:33.851813   13728 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0307 23:00:33.851813   13728 command_runner.go:130] > pod/storage-provisioner configured
	I0307 23:00:34.719607   13728 main.go:141] libmachine: [stdout =====>] : 172.20.58.27
	
	I0307 23:00:34.719607   13728 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:00:34.730186   13728 sshutil.go:53] new ssh client: &{IP:172.20.58.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-934300\id_rsa Username:docker}
	I0307 23:00:34.852545   13728 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 23:00:35.057268   13728 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0307 23:00:35.057358   13728 round_trippers.go:463] GET https://172.20.58.27:8441/apis/storage.k8s.io/v1/storageclasses
	I0307 23:00:35.057358   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:35.057358   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:35.057358   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:35.058159   13728 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0307 23:00:35.058159   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:35.058159   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:35.058159   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:35.058159   13728 round_trippers.go:580]     Content-Length: 1273
	I0307 23:00:35.058159   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:35 GMT
	I0307 23:00:35.058159   13728 round_trippers.go:580]     Audit-Id: 58646065-a420-4b05-b40a-3000facb8efe
	I0307 23:00:35.058159   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:35.058159   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:35.060649   13728 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"579"},"items":[{"metadata":{"name":"standard","uid":"eee38963-7aa3-419f-89b2-7623b5e954fb","resourceVersion":"392","creationTimestamp":"2024-03-07T22:58:07Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-07T22:58:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0307 23:00:35.061479   13728 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"eee38963-7aa3-419f-89b2-7623b5e954fb","resourceVersion":"392","creationTimestamp":"2024-03-07T22:58:07Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-07T22:58:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0307 23:00:35.061591   13728 round_trippers.go:463] PUT https://172.20.58.27:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0307 23:00:35.061591   13728 round_trippers.go:469] Request Headers:
	I0307 23:00:35.061652   13728 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:00:35.061652   13728 round_trippers.go:473]     Content-Type: application/json
	I0307 23:00:35.061673   13728 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:00:35.063383   13728 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 23:00:35.065256   13728 round_trippers.go:577] Response Headers:
	I0307 23:00:35.065297   13728 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e9c1a35f-9029-4e54-b808-69f3948db0ab
	I0307 23:00:35.065297   13728 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a81a1484-4f1f-487c-af9e-e971844251a7
	I0307 23:00:35.065297   13728 round_trippers.go:580]     Content-Length: 1220
	I0307 23:00:35.065328   13728 round_trippers.go:580]     Date: Thu, 07 Mar 2024 23:00:35 GMT
	I0307 23:00:35.065328   13728 round_trippers.go:580]     Audit-Id: e6193fa0-b29e-4300-891d-67668da531a6
	I0307 23:00:35.065328   13728 round_trippers.go:580]     Cache-Control: no-cache, private
	I0307 23:00:35.065328   13728 round_trippers.go:580]     Content-Type: application/json
	I0307 23:00:35.065328   13728 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"eee38963-7aa3-419f-89b2-7623b5e954fb","resourceVersion":"392","creationTimestamp":"2024-03-07T22:58:07Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-07T22:58:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0307 23:00:35.069498   13728 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0307 23:00:35.072524   13728 addons.go:505] duration metric: took 8.5921982s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0307 23:00:35.072765   13728 start.go:245] waiting for cluster config update ...
	I0307 23:00:35.072765   13728 start.go:254] writing updated cluster config ...
	I0307 23:00:35.084152   13728 ssh_runner.go:195] Run: rm -f paused
	I0307 23:00:35.211381   13728 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0307 23:00:35.219246   13728 out.go:177] * Done! kubectl is now configured to use "functional-934300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 07 23:00:08 functional-934300 dockerd[5598]: time="2024-03-07T23:00:08.189915578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 23:00:08 functional-934300 dockerd[5598]: time="2024-03-07T23:00:08.189929576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:00:08 functional-934300 dockerd[5598]: time="2024-03-07T23:00:08.190017267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:00:08 functional-934300 dockerd[5598]: time="2024-03-07T23:00:08.531660107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 23:00:08 functional-934300 dockerd[5598]: time="2024-03-07T23:00:08.532041165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 23:00:08 functional-934300 dockerd[5598]: time="2024-03-07T23:00:08.532065462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:00:08 functional-934300 dockerd[5598]: time="2024-03-07T23:00:08.533737478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:00:08 functional-934300 cri-dockerd[5809]: time="2024-03-07T23:00:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a15c8c9b2660d7914c0794bcc0fe7f8b66f0c746ea3de26f5707372e93f5fa48/resolv.conf as [nameserver 172.20.48.1]"
	Mar 07 23:00:08 functional-934300 dockerd[5598]: time="2024-03-07T23:00:08.809823344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 23:00:08 functional-934300 dockerd[5598]: time="2024-03-07T23:00:08.809945931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 23:00:08 functional-934300 dockerd[5598]: time="2024-03-07T23:00:08.809981127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:00:08 functional-934300 dockerd[5598]: time="2024-03-07T23:00:08.810120411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:00:11 functional-934300 cri-dockerd[5809]: time="2024-03-07T23:00:11Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Mar 07 23:00:12 functional-934300 dockerd[5598]: time="2024-03-07T23:00:12.505309033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 23:00:12 functional-934300 dockerd[5598]: time="2024-03-07T23:00:12.505655400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 23:00:12 functional-934300 dockerd[5598]: time="2024-03-07T23:00:12.506290540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:00:12 functional-934300 dockerd[5598]: time="2024-03-07T23:00:12.507168056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:00:12 functional-934300 dockerd[5598]: time="2024-03-07T23:00:12.672024356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 23:00:12 functional-934300 dockerd[5598]: time="2024-03-07T23:00:12.673359629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 23:00:12 functional-934300 dockerd[5598]: time="2024-03-07T23:00:12.673522213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:00:12 functional-934300 dockerd[5598]: time="2024-03-07T23:00:12.673791788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:00:12 functional-934300 dockerd[5598]: time="2024-03-07T23:00:12.677147468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 23:00:12 functional-934300 dockerd[5598]: time="2024-03-07T23:00:12.677214662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 23:00:12 functional-934300 dockerd[5598]: time="2024-03-07T23:00:12.677230860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:00:12 functional-934300 dockerd[5598]: time="2024-03-07T23:00:12.677402544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	df6cd9995aa52       83f6cc407eed8       About a minute ago   Running             kube-proxy                2                   ebbd4e47f17bc       kube-proxy-ng97v
	9e17007dd9ec5       ead0a4a53df89       About a minute ago   Running             coredns                   1                   96f9056d614a7       coredns-5dd5756b68-qckb6
	feafd6091f055       6e38f40d628db       About a minute ago   Running             storage-provisioner       2                   3a0f049f7b9c1       storage-provisioner
	e8bf28c0a7593       d058aa5ab969c       2 minutes ago        Running             kube-controller-manager   2                   a15c8c9b2660d       kube-controller-manager-functional-934300
	eaa8d4d5d3d96       e3db313c6dbc0       2 minutes ago        Running             kube-scheduler            2                   1373dd831066a       kube-scheduler-functional-934300
	ed25507207c37       73deb9a3f7025       2 minutes ago        Running             etcd                      2                   715967e414eed       etcd-functional-934300
	4c7579301f863       7fe0e6f37db33       2 minutes ago        Running             kube-apiserver            2                   23b17b46dedec       kube-apiserver-functional-934300
	567c79dc1dd91       e3db313c6dbc0       2 minutes ago        Created             kube-scheduler            1                   d7fc57efa5ea4       kube-scheduler-functional-934300
	617eadabe5ef2       d058aa5ab969c       2 minutes ago        Created             kube-controller-manager   1                   c0d47fefb077e       kube-controller-manager-functional-934300
	5b1ca17730949       73deb9a3f7025       2 minutes ago        Created             etcd                      1                   94ca0d6ca93b7       etcd-functional-934300
	fc0ede9e42654       83f6cc407eed8       2 minutes ago        Created             kube-proxy                1                   963ed656114ae       kube-proxy-ng97v
	c6f5d3e3d6a43       6e38f40d628db       2 minutes ago        Created             storage-provisioner       1                   9f88227834c1b       storage-provisioner
	e70fd53104d3c       7fe0e6f37db33       2 minutes ago        Created             kube-apiserver            1                   de47de55e13b9       kube-apiserver-functional-934300
	da6887f6c3fe6       ead0a4a53df89       4 minutes ago        Exited              coredns                   0                   14b951d3a571c       coredns-5dd5756b68-qckb6
	
	
	==> coredns [9e17007dd9ec] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b0d01e750f1333b12a0afb000b64bd021779da79ee4f8aee5ecad4705d75b53898cf9670ad125c407f1c536554c13092ed2cbd72906f6f0aabed3ba5d92a353f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33145 - 60180 "HINFO IN 7272459514629097245.5864463020974771818. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067170736s
	
	
	==> coredns [da6887f6c3fe] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b0d01e750f1333b12a0afb000b64bd021779da79ee4f8aee5ecad4705d75b53898cf9670ad125c407f1c536554c13092ed2cbd72906f6f0aabed3ba5d92a353f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:58631 - 40772 "HINFO IN 6191384107876191056.3903756267142074151. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.125622176s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-934300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-934300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=functional-934300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T22_57_46_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 22:57:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-934300
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 23:02:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 23:01:44 +0000   Thu, 07 Mar 2024 22:57:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 23:01:44 +0000   Thu, 07 Mar 2024 22:57:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 23:01:44 +0000   Thu, 07 Mar 2024 22:57:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 23:01:44 +0000   Thu, 07 Mar 2024 22:57:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.58.27
	  Hostname:    functional-934300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	System Info:
	  Machine ID:                 75a0bf21d5fd492888306101263131d2
	  System UUID:                5fa9e18a-aa3f-dc4f-9068-e96f4a4fa09e
	  Boot ID:                    7f99cc34-e7c9-4f4d-8aad-ebdb1d45308e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-qckb6                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m12s
	  kube-system                 etcd-functional-934300                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m25s
	  kube-system                 kube-apiserver-functional-934300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-controller-manager-functional-934300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-proxy-ng97v                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-functional-934300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m10s                kube-proxy       
	  Normal  Starting                 118s                 kube-proxy       
	  Normal  NodeHasSufficientPID     4m25s                kubelet          Node functional-934300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m25s                kubelet          Node functional-934300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s                kubelet          Node functional-934300 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                4m25s                kubelet          Node functional-934300 status is now: NodeReady
	  Normal  Starting                 4m25s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m13s                node-controller  Node functional-934300 event: Registered Node functional-934300 in Controller
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node functional-934300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node functional-934300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)  kubelet          Node functional-934300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           107s                 node-controller  Node functional-934300 event: Registered Node functional-934300 in Controller
	
	
	==> dmesg <==
	[  +0.083853] kauditd_printk_skb: 205 callbacks suppressed
	[  +4.478233] systemd-fstab-generator[1488]: Ignoring "noauto" option for root device
	[  +4.640570] systemd-fstab-generator[1738]: Ignoring "noauto" option for root device
	[  +0.087837] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.272507] systemd-fstab-generator[2699]: Ignoring "noauto" option for root device
	[  +0.103729] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.491652] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.014959] systemd-fstab-generator[3151]: Ignoring "noauto" option for root device
	[Mar 7 22:58] kauditd_printk_skb: 80 callbacks suppressed
	[ +32.691226] kauditd_printk_skb: 8 callbacks suppressed
	[Mar 7 22:59] systemd-fstab-generator[5125]: Ignoring "noauto" option for root device
	[  +0.544534] systemd-fstab-generator[5161]: Ignoring "noauto" option for root device
	[  +0.219526] systemd-fstab-generator[5173]: Ignoring "noauto" option for root device
	[  +0.243533] systemd-fstab-generator[5187]: Ignoring "noauto" option for root device
	[  +5.218221] kauditd_printk_skb: 89 callbacks suppressed
	[Mar 7 23:00] systemd-fstab-generator[5762]: Ignoring "noauto" option for root device
	[  +0.165607] systemd-fstab-generator[5774]: Ignoring "noauto" option for root device
	[  +0.179429] systemd-fstab-generator[5786]: Ignoring "noauto" option for root device
	[  +0.217091] systemd-fstab-generator[5801]: Ignoring "noauto" option for root device
	[  +0.732090] systemd-fstab-generator[5949]: Ignoring "noauto" option for root device
	[  +1.865586] kauditd_printk_skb: 179 callbacks suppressed
	[  +1.609106] systemd-fstab-generator[6909]: Ignoring "noauto" option for root device
	[  +5.855956] kauditd_printk_skb: 75 callbacks suppressed
	[ +11.784880] kauditd_printk_skb: 7 callbacks suppressed
	[  +2.265367] systemd-fstab-generator[7891]: Ignoring "noauto" option for root device
	
	
	==> etcd [5b1ca1773094] <==
	
	
	==> etcd [ed25507207c3] <==
	{"level":"info","ts":"2024-03-07T23:00:08.455987Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-07T23:00:08.455996Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-07T23:00:08.456186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2467ca9b65f3b09c switched to configuration voters=(2623288076745814172)"}
	{"level":"info","ts":"2024-03-07T23:00:08.456235Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"15fd5ac31f9bc93f","local-member-id":"2467ca9b65f3b09c","added-peer-id":"2467ca9b65f3b09c","added-peer-peer-urls":["https://172.20.58.27:2380"]}
	{"level":"info","ts":"2024-03-07T23:00:08.456323Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"15fd5ac31f9bc93f","local-member-id":"2467ca9b65f3b09c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T23:00:08.456391Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T23:00:08.472082Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-07T23:00:08.472898Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.20.58.27:2380"}
	{"level":"info","ts":"2024-03-07T23:00:08.473013Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.20.58.27:2380"}
	{"level":"info","ts":"2024-03-07T23:00:08.476053Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"2467ca9b65f3b09c","initial-advertise-peer-urls":["https://172.20.58.27:2380"],"listen-peer-urls":["https://172.20.58.27:2380"],"advertise-client-urls":["https://172.20.58.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.58.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-07T23:00:08.477341Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-07T23:00:10.290561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2467ca9b65f3b09c is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-07T23:00:10.290897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2467ca9b65f3b09c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-07T23:00:10.290939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2467ca9b65f3b09c received MsgPreVoteResp from 2467ca9b65f3b09c at term 2"}
	{"level":"info","ts":"2024-03-07T23:00:10.290954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2467ca9b65f3b09c became candidate at term 3"}
	{"level":"info","ts":"2024-03-07T23:00:10.290962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2467ca9b65f3b09c received MsgVoteResp from 2467ca9b65f3b09c at term 3"}
	{"level":"info","ts":"2024-03-07T23:00:10.290973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2467ca9b65f3b09c became leader at term 3"}
	{"level":"info","ts":"2024-03-07T23:00:10.290987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2467ca9b65f3b09c elected leader 2467ca9b65f3b09c at term 3"}
	{"level":"info","ts":"2024-03-07T23:00:10.300644Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"2467ca9b65f3b09c","local-member-attributes":"{Name:functional-934300 ClientURLs:[https://172.20.58.27:2379]}","request-path":"/0/members/2467ca9b65f3b09c/attributes","cluster-id":"15fd5ac31f9bc93f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-07T23:00:10.300688Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T23:00:10.300923Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T23:00:10.302016Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.58.27:2379"}
	{"level":"info","ts":"2024-03-07T23:00:10.302058Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-07T23:00:10.302394Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-07T23:00:10.302433Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:02:11 up 6 min,  0 users,  load average: 0.42, 0.43, 0.22
	Linux functional-934300 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4c7579301f86] <==
	I0307 23:00:11.586549       1 available_controller.go:423] Starting AvailableConditionController
	I0307 23:00:11.613097       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0307 23:00:11.586558       1 controller.go:78] Starting OpenAPI AggregationController
	I0307 23:00:11.706247       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0307 23:00:11.711628       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0307 23:00:11.712514       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0307 23:00:11.717746       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0307 23:00:11.719396       1 aggregator.go:166] initial CRD sync complete...
	I0307 23:00:11.719492       1 autoregister_controller.go:141] Starting autoregister controller
	I0307 23:00:11.719644       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0307 23:00:11.719773       1 cache.go:39] Caches are synced for autoregister controller
	I0307 23:00:11.726880       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0307 23:00:11.784315       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0307 23:00:11.784395       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0307 23:00:11.784403       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0307 23:00:11.785314       1 shared_informer.go:318] Caches are synced for configmaps
	E0307 23:00:11.819272       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0307 23:00:12.596281       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0307 23:00:13.260324       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0307 23:00:13.275477       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0307 23:00:13.342409       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0307 23:00:13.380907       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0307 23:00:13.389948       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0307 23:00:24.308532       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0307 23:00:24.352071       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [e70fd53104d3] <==
	
	
	==> kube-controller-manager [617eadabe5ef] <==
	
	
	==> kube-controller-manager [e8bf28c0a759] <==
	I0307 23:00:24.313060       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0307 23:00:24.313394       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0307 23:00:24.313430       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0307 23:00:24.313439       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0307 23:00:24.314688       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0307 23:00:24.324370       1 shared_informer.go:318] Caches are synced for attach detach
	I0307 23:00:24.327481       1 shared_informer.go:318] Caches are synced for crt configmap
	I0307 23:00:24.329903       1 shared_informer.go:318] Caches are synced for expand
	I0307 23:00:24.332536       1 shared_informer.go:318] Caches are synced for taint
	I0307 23:00:24.332713       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0307 23:00:24.332885       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-934300"
	I0307 23:00:24.332995       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0307 23:00:24.332921       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0307 23:00:24.333797       1 taint_manager.go:210] "Sending events to api server"
	I0307 23:00:24.333852       1 event.go:307] "Event occurred" object="functional-934300" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-934300 event: Registered Node functional-934300 in Controller"
	I0307 23:00:24.341240       1 shared_informer.go:318] Caches are synced for endpoint
	I0307 23:00:24.353886       1 shared_informer.go:318] Caches are synced for cronjob
	I0307 23:00:24.377636       1 shared_informer.go:318] Caches are synced for disruption
	I0307 23:00:24.425628       1 shared_informer.go:318] Caches are synced for resource quota
	I0307 23:00:24.445964       1 shared_informer.go:318] Caches are synced for resource quota
	I0307 23:00:24.471813       1 shared_informer.go:318] Caches are synced for service account
	I0307 23:00:24.477379       1 shared_informer.go:318] Caches are synced for namespace
	I0307 23:00:24.825237       1 shared_informer.go:318] Caches are synced for garbage collector
	I0307 23:00:24.825357       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0307 23:00:24.838078       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [df6cd9995aa5] <==
	I0307 23:00:12.860153       1 server_others.go:69] "Using iptables proxy"
	I0307 23:00:12.884812       1 node.go:141] Successfully retrieved node IP: 172.20.58.27
	I0307 23:00:12.929002       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0307 23:00:12.929041       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0307 23:00:12.931211       1 server_others.go:152] "Using iptables Proxier"
	I0307 23:00:12.931259       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 23:00:12.931760       1 server.go:846] "Version info" version="v1.28.4"
	I0307 23:00:12.931788       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 23:00:12.933081       1 config.go:188] "Starting service config controller"
	I0307 23:00:12.933163       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0307 23:00:12.933183       1 config.go:97] "Starting endpoint slice config controller"
	I0307 23:00:12.933188       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0307 23:00:12.933790       1 config.go:315] "Starting node config controller"
	I0307 23:00:12.933819       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0307 23:00:13.034350       1 shared_informer.go:318] Caches are synced for node config
	I0307 23:00:13.034399       1 shared_informer.go:318] Caches are synced for service config
	I0307 23:00:13.034420       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [fc0ede9e4265] <==
	
	
	==> kube-scheduler [567c79dc1dd9] <==
	
	
	==> kube-scheduler [eaa8d4d5d3d9] <==
	I0307 23:00:09.111637       1 serving.go:348] Generated self-signed cert in-memory
	W0307 23:00:11.681439       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0307 23:00:11.681629       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 23:00:11.681676       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0307 23:00:11.681746       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0307 23:00:11.732826       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0307 23:00:11.732917       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 23:00:11.735287       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0307 23:00:11.735575       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0307 23:00:11.735775       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 23:00:11.735923       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0307 23:00:11.836683       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 07 23:00:11 functional-934300 kubelet[6916]: I0307 23:00:11.777701    6916 kubelet_node_status.go:73] "Successfully registered node" node="functional-934300"
	Mar 07 23:00:11 functional-934300 kubelet[6916]: I0307 23:00:11.779270    6916 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 07 23:00:11 functional-934300 kubelet[6916]: I0307 23:00:11.780179    6916 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 07 23:00:11 functional-934300 kubelet[6916]: I0307 23:00:11.950527    6916 apiserver.go:52] "Watching apiserver"
	Mar 07 23:00:11 functional-934300 kubelet[6916]: I0307 23:00:11.953249    6916 topology_manager.go:215] "Topology Admit Handler" podUID="e5408fb9-13f3-46d1-9509-d0c312f0c175" podNamespace="kube-system" podName="kube-proxy-ng97v"
	Mar 07 23:00:11 functional-934300 kubelet[6916]: I0307 23:00:11.953406    6916 topology_manager.go:215] "Topology Admit Handler" podUID="1d70d200-b84d-406f-a812-aeada0591d68" podNamespace="kube-system" podName="coredns-5dd5756b68-qckb6"
	Mar 07 23:00:11 functional-934300 kubelet[6916]: I0307 23:00:11.953537    6916 topology_manager.go:215] "Topology Admit Handler" podUID="c743467f-e104-4404-b662-be573f6ec4a0" podNamespace="kube-system" podName="storage-provisioner"
	Mar 07 23:00:11 functional-934300 kubelet[6916]: I0307 23:00:11.981138    6916 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 07 23:00:12 functional-934300 kubelet[6916]: I0307 23:00:12.008287    6916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c743467f-e104-4404-b662-be573f6ec4a0-tmp\") pod \"storage-provisioner\" (UID: \"c743467f-e104-4404-b662-be573f6ec4a0\") " pod="kube-system/storage-provisioner"
	Mar 07 23:00:12 functional-934300 kubelet[6916]: I0307 23:00:12.008955    6916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5408fb9-13f3-46d1-9509-d0c312f0c175-xtables-lock\") pod \"kube-proxy-ng97v\" (UID: \"e5408fb9-13f3-46d1-9509-d0c312f0c175\") " pod="kube-system/kube-proxy-ng97v"
	Mar 07 23:00:12 functional-934300 kubelet[6916]: I0307 23:00:12.009050    6916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5408fb9-13f3-46d1-9509-d0c312f0c175-lib-modules\") pod \"kube-proxy-ng97v\" (UID: \"e5408fb9-13f3-46d1-9509-d0c312f0c175\") " pod="kube-system/kube-proxy-ng97v"
	Mar 07 23:00:12 functional-934300 kubelet[6916]: I0307 23:00:12.254164    6916 scope.go:117] "RemoveContainer" containerID="c6f5d3e3d6a43e777f06ce4b27481c16dd4e633faf1cc52a32bd288f41dfad25"
	Mar 07 23:00:12 functional-934300 kubelet[6916]: I0307 23:00:12.256650    6916 scope.go:117] "RemoveContainer" containerID="da6887f6c3fe6a8ae07ca5a9b1fa7ae1f9f55baf8fd0347857f61181304075c1"
	Mar 07 23:00:12 functional-934300 kubelet[6916]: I0307 23:00:12.257023    6916 scope.go:117] "RemoveContainer" containerID="fc0ede9e426541b0dbfb6f9f40f278417c35fc034bb97ac7b913c5f8744672af"
	Mar 07 23:00:21 functional-934300 kubelet[6916]: I0307 23:00:21.947971    6916 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 07 23:01:07 functional-934300 kubelet[6916]: E0307 23:01:07.045953    6916 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 07 23:01:07 functional-934300 kubelet[6916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 07 23:01:07 functional-934300 kubelet[6916]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 07 23:01:07 functional-934300 kubelet[6916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 07 23:01:07 functional-934300 kubelet[6916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 07 23:02:07 functional-934300 kubelet[6916]: E0307 23:02:07.047046    6916 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 07 23:02:07 functional-934300 kubelet[6916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 07 23:02:07 functional-934300 kubelet[6916]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 07 23:02:07 functional-934300 kubelet[6916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 07 23:02:07 functional-934300 kubelet[6916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [c6f5d3e3d6a4] <==
	
	
	==> storage-provisioner [feafd6091f05] <==
	I0307 23:00:12.577090       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 23:00:12.602040       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 23:00:12.602270       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 23:00:30.010941       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 23:00:30.011075       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-934300_3dccd8f0-ca58-4516-b334-68337f277862!
	I0307 23:00:30.011514       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e984042c-ea8a-4d64-8a1e-393a63c0106f", APIVersion:"v1", ResourceVersion:"572", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-934300_3dccd8f0-ca58-4516-b334-68337f277862 became leader
	I0307 23:00:30.112277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-934300_3dccd8f0-ca58-4516-b334-68337f277862!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 23:02:03.952682   10188 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-934300 -n functional-934300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-934300 -n functional-934300: (10.9925866s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-934300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (30.91s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-934300 config unset cpus" to be -""- but got *"W0307 23:04:58.533689    2356 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-934300 config get cpus: exit status 14 (263.8027ms)

                                                
                                                
** stderr ** 
	W0307 23:04:58.857135    8508 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-934300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0307 23:04:58.857135    8508 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-934300 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0307 23:04:59.136656    4688 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-934300 config get cpus" to be -""- but got *"W0307 23:04:59.469440   13276 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-934300 config unset cpus" to be -""- but got *"W0307 23:04:59.769147    1752 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-934300 config get cpus: exit status 14 (251.8776ms)

                                                
                                                
** stderr ** 
	W0307 23:05:00.070150    9676 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-934300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0307 23:05:00.070150    9676 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.80s)
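Note on the failures above: every unexpected stderr line is the same minikube warning about the unresolvable Docker CLI context "default", and the long hex directory in the missing meta.json path is simply the SHA-256 digest of that context name, which is how the Docker CLI lays out its contexts/meta directory. A minimal Go sketch (standard library only; the home directory is copied from the log purely for illustration) that reproduces the digest and the path shape:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"path/filepath"
)

func main() {
	// The Docker CLI stores context metadata under
	// <home>\.docker\contexts\meta\<sha256 of the context name>\meta.json.
	// Hashing "default" reproduces the directory name seen in the warnings above.
	sum := sha256.Sum256([]byte("default"))
	digest := hex.EncodeToString(sum[:])
	fmt.Println(digest) // 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f

	// Illustrative only: the jenkins home directory taken from the log output.
	home := `C:\Users\jenkins.minikube7`
	fmt.Println(filepath.Join(home, ".docker", "contexts", "meta", digest, "meta.json"))
}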

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (15.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-934300 service --namespace=default --https --url hello-node: exit status 1 (15.0564031s)

                                                
                                                
** stderr ** 
	W0307 23:05:45.125483   10028 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-934300 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.06s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 service hello-node --url --format={{.IP}}
E0307 23:06:00.547054    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-934300 service hello-node --url --format={{.IP}}: exit status 1 (15.0062866s)

                                                
                                                
** stderr ** 
	W0307 23:06:00.262201    8292 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-934300 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.02s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-934300 service hello-node --url: exit status 1 (15.0246133s)

                                                
                                                
** stderr ** 
	W0307 23:06:15.221748   14316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-934300 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.03s)
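For reference, the three ServiceCmd subtests fail the same way: "minikube service ... --url" exits 1 and prints nothing, so the assertion at functional_test.go:1569 ends up checking an empty string for an "http" scheme. A small Go sketch of that kind of scheme check; the passing sample value is hypothetical (the node IP 172.20.58.27 appears in the kube-proxy log above, the port is made up), and the failing case mirrors the empty output seen in these runs:

package main

import (
	"fmt"
	"net/url"
)

// checkServiceURL mirrors the shape of the assertion in the test output above:
// the returned endpoint must parse and use the http scheme.
func checkServiceURL(raw string) error {
	u, err := url.Parse(raw)
	if err != nil {
		return fmt.Errorf("parse %q: %w", raw, err)
	}
	if u.Scheme != "http" {
		return fmt.Errorf("expected scheme %q, got %q", "http", u.Scheme)
	}
	return nil
}

func main() {
	fmt.Println(checkServiceURL("http://172.20.58.27:30080")) // <nil>
	fmt.Println(checkServiceURL(""))                          // expected scheme "http", got ""
}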

                                                
                                    
x
+
TestMutliControlPlane/serial/PingHostFromPods (66.62s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-8vztn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-8vztn -- sh -c "ping -c 1 172.20.48.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-8vztn -- sh -c "ping -c 1 172.20.48.1": exit status 1 (10.4918236s)

                                                
                                                
-- stdout --
	PING 172.20.48.1 (172.20.48.1): 56 data bytes
	
	--- 172.20.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 23:22:59.008577     800 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.20.48.1) from pod (busybox-5b5d89c9d6-8vztn): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-dswbq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-dswbq -- sh -c "ping -c 1 172.20.48.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-dswbq -- sh -c "ping -c 1 172.20.48.1": exit status 1 (10.4944631s)

                                                
                                                
-- stdout --
	PING 172.20.48.1 (172.20.48.1): 56 data bytes
	
	--- 172.20.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 23:23:10.028645    6644 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.20.48.1) from pod (busybox-5b5d89c9d6-dswbq): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-wmtt9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-wmtt9 -- sh -c "ping -c 1 172.20.48.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-wmtt9 -- sh -c "ping -c 1 172.20.48.1": exit status 1 (10.4805724s)

                                                
                                                
-- stdout --
	PING 172.20.48.1 (172.20.48.1): 56 data bytes
	
	--- 172.20.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 23:23:20.998777   14088 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.20.48.1) from pod (busybox-5b5d89c9d6-wmtt9): exit status 1
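All three pods resolve host.minikube.internal but lose 100% of ICMP echo packets to the Hyper-V host gateway 172.20.48.1. Windows drops inbound ICMPv4 echo requests by default, so this pattern often indicates the host firewall filtering the pings rather than a broken route; probing a TCP port that is known to be open on the host separates the two cases. A rough Go diagnostic sketch, assuming a Linux environment such as the minikube node (the ping flags are the Linux ones) and a placeholder port:

package main

import (
	"fmt"
	"net"
	"os/exec"
	"time"
)

// probeHost: if ICMP echo fails but a TCP dial to a known-open port on the
// same host succeeds, the host is reachable and the echo requests are being
// filtered (for example by the Windows host firewall) rather than mis-routed.
// hostIP comes from the failing test; port is a placeholder for any listener
// you know is open on the host.
func probeHost(hostIP string, port int) {
	ping := exec.Command("ping", "-c", "1", "-W", "2", hostIP)
	if out, err := ping.CombinedOutput(); err != nil {
		fmt.Printf("icmp: failed (%v)\n%s", err, out)
	} else {
		fmt.Println("icmp: ok")
	}

	addr := fmt.Sprintf("%s:%d", hostIP, port)
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("tcp %s: failed (%v)\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("tcp %s: ok\n", addr)
}

func main() {
	probeHost("172.20.48.1", 445) // 445 is only an example of a commonly open Windows port
}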
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-792400 -n ha-792400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-792400 -n ha-792400: (11.4959772s)
helpers_test.go:244: <<< TestMutliControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 logs -n 25: (8.3312002s)
helpers_test.go:252: TestMutliControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-934300                    | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:08 UTC | 07 Mar 24 23:08 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-934300 image build -t     | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:08 UTC | 07 Mar 24 23:09 UTC |
	|         | localhost/my-image:functional-934300 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-934300 image ls           | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:09 UTC | 07 Mar 24 23:09 UTC |
	| delete  | -p functional-934300                 | functional-934300 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:10 UTC | 07 Mar 24 23:11 UTC |
	| start   | -p ha-792400 --wait=true             | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:11 UTC | 07 Mar 24 23:22 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- apply -f             | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- rollout status       | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- get pods -o          | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- get pods -o          | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | busybox-5b5d89c9d6-8vztn --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | busybox-5b5d89c9d6-dswbq --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | busybox-5b5d89c9d6-wmtt9 --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | busybox-5b5d89c9d6-8vztn --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | busybox-5b5d89c9d6-dswbq --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | busybox-5b5d89c9d6-wmtt9 --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | busybox-5b5d89c9d6-8vztn -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | busybox-5b5d89c9d6-dswbq -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | busybox-5b5d89c9d6-wmtt9 -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- get pods -o          | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC | 07 Mar 24 23:22 UTC |
	|         | busybox-5b5d89c9d6-8vztn             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:22 UTC |                     |
	|         | busybox-5b5d89c9d6-8vztn -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.48.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:23 UTC | 07 Mar 24 23:23 UTC |
	|         | busybox-5b5d89c9d6-dswbq             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:23 UTC |                     |
	|         | busybox-5b5d89c9d6-dswbq -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.48.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:23 UTC | 07 Mar 24 23:23 UTC |
	|         | busybox-5b5d89c9d6-wmtt9             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-792400 -- exec                 | ha-792400         | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:23 UTC |                     |
	|         | busybox-5b5d89c9d6-wmtt9 -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.48.1             |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 23:11:38
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 23:11:38.444444    6816 out.go:291] Setting OutFile to fd 1008 ...
	I0307 23:11:38.444444    6816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 23:11:38.444444    6816 out.go:304] Setting ErrFile to fd 808...
	I0307 23:11:38.444444    6816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 23:11:38.468066    6816 out.go:298] Setting JSON to false
	I0307 23:11:38.469810    6816 start.go:129] hostinfo: {"hostname":"minikube7","uptime":12052,"bootTime":1709841045,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0307 23:11:38.469810    6816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 23:11:38.472877    6816 out.go:177] * [ha-792400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0307 23:11:38.479638    6816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:11:38.478397    6816 notify.go:220] Checking for updates...
	I0307 23:11:38.482239    6816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 23:11:38.484603    6816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0307 23:11:38.487541    6816 out.go:177]   - MINIKUBE_LOCATION=16214
	I0307 23:11:38.489679    6816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 23:11:38.493211    6816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 23:11:43.038717    6816 out.go:177] * Using the hyperv driver based on user configuration
	I0307 23:11:43.045309    6816 start.go:297] selected driver: hyperv
	I0307 23:11:43.045309    6816 start.go:901] validating driver "hyperv" against <nil>
	I0307 23:11:43.045309    6816 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 23:11:43.091556    6816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 23:11:43.092441    6816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 23:11:43.092441    6816 cni.go:84] Creating CNI manager for ""
	I0307 23:11:43.092441    6816 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0307 23:11:43.092441    6816 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 23:11:43.092441    6816 start.go:340] cluster config:
	{Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 23:11:43.093023    6816 iso.go:125] acquiring lock: {Name:mk41e0d38e058de906ab8df117c3158b3dc0e5b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 23:11:43.098711    6816 out.go:177] * Starting "ha-792400" primary control-plane node in "ha-792400" cluster
	I0307 23:11:43.099873    6816 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 23:11:43.099873    6816 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0307 23:11:43.099873    6816 cache.go:56] Caching tarball of preloaded images
	I0307 23:11:43.102273    6816 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 23:11:43.102483    6816 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 23:11:43.102664    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:11:43.103166    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json: {Name:mkf5192d5b57415acf5d5449be46341d91e1b9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:11:43.103954    6816 start.go:360] acquireMachinesLock for ha-792400: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 23:11:43.104387    6816 start.go:364] duration metric: took 388.2µs to acquireMachinesLock for "ha-792400"
	I0307 23:11:43.104505    6816 start.go:93] Provisioning new machine with config: &{Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:11:43.104505    6816 start.go:125] createHost starting for "" (driver="hyperv")
	I0307 23:11:43.105687    6816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 23:11:43.107584    6816 start.go:159] libmachine.API.Create for "ha-792400" (driver="hyperv")
	I0307 23:11:43.107584    6816 client.go:168] LocalClient.Create starting
	I0307 23:11:43.109801    6816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0307 23:11:43.110323    6816 main.go:141] libmachine: Decoding PEM data...
	I0307 23:11:43.110323    6816 main.go:141] libmachine: Parsing certificate...
	I0307 23:11:43.110548    6816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0307 23:11:43.110672    6816 main.go:141] libmachine: Decoding PEM data...
	I0307 23:11:43.110672    6816 main.go:141] libmachine: Parsing certificate...
	I0307 23:11:43.110672    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0307 23:11:44.801566    6816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0307 23:11:44.801566    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:11:44.810549    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0307 23:11:46.263991    6816 main.go:141] libmachine: [stdout =====>] : False
	
	I0307 23:11:46.263991    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:11:46.264332    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0307 23:11:47.494057    6816 main.go:141] libmachine: [stdout =====>] : True
	
	I0307 23:11:47.494057    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:11:47.494057    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0307 23:11:50.483353    6816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0307 23:11:50.483541    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:11:50.485413    6816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0307 23:11:50.977315    6816 main.go:141] libmachine: Creating SSH key...
	I0307 23:11:51.085998    6816 main.go:141] libmachine: Creating VM...
	I0307 23:11:51.085998    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0307 23:11:53.479107    6816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0307 23:11:53.583884    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:11:53.598492    6816 main.go:141] libmachine: Using switch "Default Switch"
	I0307 23:11:53.598799    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0307 23:11:55.078860    6816 main.go:141] libmachine: [stdout =====>] : True
	
	I0307 23:11:55.090174    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:11:55.090266    6816 main.go:141] libmachine: Creating VHD
	I0307 23:11:55.090362    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0307 23:11:58.356006    6816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 862C84AF-F98E-4909-8B61-C2162CA03912
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0307 23:11:58.356006    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:11:58.356006    6816 main.go:141] libmachine: Writing magic tar header
	I0307 23:11:58.356006    6816 main.go:141] libmachine: Writing SSH key tar header
	I0307 23:11:58.362814    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0307 23:12:01.153134    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:01.162819    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:01.162819    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\disk.vhd' -SizeBytes 20000MB
	I0307 23:12:03.418843    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:03.428215    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:03.428286    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-792400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0307 23:12:06.566864    6816 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-792400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0307 23:12:06.566864    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:06.566955    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-792400 -DynamicMemoryEnabled $false
	I0307 23:12:08.431004    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:08.431004    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:08.431266    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-792400 -Count 2
	I0307 23:12:10.270550    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:10.270550    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:10.280036    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-792400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\boot2docker.iso'
	I0307 23:12:12.418383    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:12.418383    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:12.428259    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-792400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\disk.vhd'
	I0307 23:12:14.669648    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:14.669648    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:14.674712    6816 main.go:141] libmachine: Starting VM...
	I0307 23:12:14.674712    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-792400
	I0307 23:12:17.329220    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:17.329220    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:17.329220    6816 main.go:141] libmachine: Waiting for host to start...
	I0307 23:12:17.329220    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:19.249082    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:19.249082    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:19.256896    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:21.396309    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:21.396309    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:22.409940    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:24.349508    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:24.353234    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:24.353315    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:26.605658    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:26.605658    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:27.609373    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:29.477167    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:29.487971    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:29.487971    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:31.709696    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:31.709696    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:32.716688    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:34.608950    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:34.609709    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:34.609709    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:36.872811    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:36.872811    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:37.885900    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:39.837462    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:39.837462    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:39.848153    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:41.995938    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:12:41.995938    6816 main.go:141] libmachine: [stderr =====>] : 
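For readers unfamiliar with the hyperv driver's start-up pattern shown above: after Start-VM, libmachine keeps alternating two PowerShell queries, the VM state and the first IP address of the first network adapter, until an address is reported. A minimal Go sketch of that loop follows; the VM name and the one-second retry interval are placeholders for illustration, not minikube's exact values.

// hypervwait.go - sketch of the polling loop above: ask Hyper-V (via
// powershell.exe, invoked exactly as logged) for the VM state and for the
// first IP of the first network adapter until an address shows up.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psOutput runs a single PowerShell expression and returns its trimmed stdout.
func psOutput(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	vm := "ha-792400"
	for {
		state, err := psOutput(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
		if err == nil && state == "Running" {
			ip, _ := psOutput(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
			if ip != "" {
				fmt.Println("VM reachable at", ip)
				return
			}
		}
		time.Sleep(1 * time.Second)
	}
}

In this run the address 172.20.58.169 appeared roughly 25 seconds after Start-VM.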
	I0307 23:12:42.006202    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:43.776393    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:43.787163    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:43.787256    6816 machine.go:94] provisionDockerMachine start ...
	I0307 23:12:43.787410    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:45.572230    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:45.572230    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:45.582468    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:47.734483    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:12:47.745011    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:47.750118    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:12:47.757774    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:12:47.757774    6816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 23:12:47.875084    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 23:12:47.875084    6816 buildroot.go:166] provisioning hostname "ha-792400"
	I0307 23:12:47.875155    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:49.660460    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:49.660556    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:49.660556    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:51.799596    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:12:51.799596    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:51.804596    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:12:51.805295    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:12:51.805295    6816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792400 && echo "ha-792400" | sudo tee /etc/hostname
	I0307 23:12:51.941788    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792400
	
	I0307 23:12:51.941788    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:53.726495    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:53.732406    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:53.732461    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:55.876389    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:12:55.886880    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:55.892693    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:12:55.892850    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:12:55.892850    6816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792400/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 23:12:56.022802    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
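The SSH script above is the idempotent /etc/hosts fix-up: if no entry already ends with the new hostname, the 127.0.1.1 line is rewritten, or appended when absent. A self-contained Go sketch of the same logic, operating on a local copy of the file and printing the result rather than editing in place; the path and hostname are illustrative.

// hostsfix.go - sketch of the hostname entry logic run over SSH above.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // an entry for this hostname already exists
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(ensureHostname(string(data), "ha-792400"))
}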
	I0307 23:12:56.022872    6816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0307 23:12:56.022953    6816 buildroot.go:174] setting up certificates
	I0307 23:12:56.022997    6816 provision.go:84] configureAuth start
	I0307 23:12:56.023069    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:57.817086    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:57.817086    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:57.817200    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:59.919167    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:12:59.929688    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:59.929688    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:01.729474    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:01.733698    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:01.733698    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:03.866168    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:03.866168    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:03.866168    6816 provision.go:143] copyHostCerts
	I0307 23:13:03.876430    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0307 23:13:03.876607    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0307 23:13:03.876607    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0307 23:13:03.877172    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0307 23:13:03.878554    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0307 23:13:03.878768    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0307 23:13:03.878835    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0307 23:13:03.878967    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0307 23:13:03.879760    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0307 23:13:03.880285    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0307 23:13:03.880285    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0307 23:13:03.880413    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0307 23:13:03.881793    6816 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-792400 san=[127.0.0.1 172.20.58.169 ha-792400 localhost minikube]
	I0307 23:13:04.084089    6816 provision.go:177] copyRemoteCerts
	I0307 23:13:04.107922    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 23:13:04.107922    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:05.913692    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:05.923745    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:05.923745    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:08.096603    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:08.096603    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:08.107950    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:13:08.208411    6816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1004505s)
	I0307 23:13:08.208411    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0307 23:13:08.209363    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0307 23:13:08.248004    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0307 23:13:08.248004    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 23:13:08.288127    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0307 23:13:08.288127    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 23:13:08.326817    6816 provision.go:87] duration metric: took 12.3036685s to configureAuth
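copyRemoteCerts places the CA plus the freshly generated server cert and key under /etc/docker, which is where the dockerd TLS flags written into the unit file further below (--tlscacert, --tlscert, --tlskey) expect them. A rough local-filesystem sketch of that copy step; the source file names are placeholders, since minikube actually streams these over SSH from the Windows host.

// dockertls.go - sketch of copying the TLS material into /etc/docker.
package main

import (
	"io"
	"os"
)

func copyFile(src, dst string, mode os.FileMode) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, mode)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := os.MkdirAll("/etc/docker", 0755); err != nil {
		panic(err)
	}
	pairs := [][2]string{
		{"server.pem", "/etc/docker/server.pem"},
		{"server-key.pem", "/etc/docker/server-key.pem"},
		{"ca.pem", "/etc/docker/ca.pem"},
	}
	for _, p := range pairs {
		if err := copyFile(p[0], p[1], 0600); err != nil {
			panic(err)
		}
	}
}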
	I0307 23:13:08.326919    6816 buildroot.go:189] setting minikube options for container-runtime
	I0307 23:13:08.327541    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:13:08.327646    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:10.078604    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:10.088591    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:10.088591    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:12.193804    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:12.193804    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:12.208519    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:13:12.209265    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:13:12.209265    6816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 23:13:12.327525    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 23:13:12.327525    6816 buildroot.go:70] root file system type: tmpfs
	I0307 23:13:12.327843    6816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 23:13:12.327933    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:14.078367    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:14.078367    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:14.088535    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:16.199916    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:16.199916    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:16.204529    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:13:16.205229    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:13:16.205229    6816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 23:13:16.346259    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 23:13:16.346399    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:18.101170    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:18.101170    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:18.101663    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:20.202574    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:20.212237    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:20.217283    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:13:20.217283    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:13:20.217283    6816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 23:13:21.247313    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 23:13:21.247313    6816 machine.go:97] duration metric: took 37.4597052s to provisionDockerMachine
	I0307 23:13:21.247313    6816 client.go:171] duration metric: took 1m38.1388065s to LocalClient.Create
	I0307 23:13:21.247313    6816 start.go:167] duration metric: took 1m38.1388065s to libmachine.API.Create "ha-792400"
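The last SSH command of the provisioning step only replaces /lib/systemd/system/docker.service and restarts Docker when the freshly rendered unit differs from what is on disk; here the old file did not exist, so the diff fails and the new unit is installed. A minimal Go sketch of that compare-and-swap, assuming it runs as root on the guest.

// unitswap.go - sketch of the compare-and-swap above: move the rendered unit
// into place and reload/enable/restart docker only when it actually changed.
package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	const unit = "/lib/systemd/system/docker.service"
	cur, _ := os.ReadFile(unit) // a missing unit is treated as empty, forcing the swap
	next, err := os.ReadFile(unit + ".new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(cur, next) {
		return // unit unchanged, leave docker alone
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		cmd := exec.Command(args[0], args[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}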
	I0307 23:13:21.247313    6816 start.go:293] postStartSetup for "ha-792400" (driver="hyperv")
	I0307 23:13:21.247313    6816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 23:13:21.258925    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 23:13:21.258925    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:23.012360    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:23.022906    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:23.023018    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:25.142571    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:25.142571    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:25.153620    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:13:25.249260    6816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (3.9902976s)
	I0307 23:13:25.261757    6816 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 23:13:25.267450    6816 info.go:137] Remote host: Buildroot 2023.02.9
	I0307 23:13:25.267538    6816 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0307 23:13:25.267538    6816 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0307 23:13:25.268276    6816 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0307 23:13:25.268276    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0307 23:13:25.278228    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 23:13:25.296329    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0307 23:13:25.333880    6816 start.go:296] duration metric: took 4.0865291s for postStartSetup
	I0307 23:13:25.336578    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:27.092752    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:27.092752    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:27.102159    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:29.277102    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:29.277102    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:29.277348    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:13:29.279822    6816 start.go:128] duration metric: took 1m46.1743186s to createHost
	I0307 23:13:29.279945    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:31.033942    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:31.044144    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:31.044263    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:33.192048    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:33.202085    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:33.206574    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:13:33.207195    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:13:33.207195    6816 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 23:13:33.325040    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709853213.334679515
	
	I0307 23:13:33.325040    6816 fix.go:216] guest clock: 1709853213.334679515
	I0307 23:13:33.325040    6816 fix.go:229] Guest: 2024-03-07 23:13:33.334679515 +0000 UTC Remote: 2024-03-07 23:13:29.279945 +0000 UTC m=+110.991515101 (delta=4.054734515s)
	I0307 23:13:33.325040    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:35.062598    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:35.062598    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:35.072444    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:37.201609    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:37.211461    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:37.216395    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:13:37.217074    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:13:37.217074    6816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709853213
	I0307 23:13:37.346236    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar  7 23:13:33 UTC 2024
	
	I0307 23:13:37.346292    6816 fix.go:236] clock set: Thu Mar  7 23:13:33 UTC 2024
	 (err=<nil>)
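The clock fix above reads the guest clock as fractional epoch seconds ("date +%s.%N"), compares it with the host-side timestamp, and corrects the guest with "sudo date -s @<seconds>"; the measured drift in this run was about 4 seconds. A small Go sketch of the same comparison; the 2-second threshold is an assumption for illustration.

// clocksync.go - sketch of the guest clock check: parse epoch seconds,
// compute drift against the local clock, and print the correcting command.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	whole := int64(secs)
	guest := time.Unix(whole, int64((secs-float64(whole))*1e9))
	drift := guest.Sub(time.Now())
	fmt.Printf("guest clock: %s drift: %s\n", guest.UTC(), drift)
	if drift > 2*time.Second || drift < -2*time.Second {
		fmt.Printf("would run: sudo date -s @%d\n", whole)
	}
}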
	I0307 23:13:37.346292    6816 start.go:83] releasing machines lock for "ha-792400", held for 1m54.2408316s
	I0307 23:13:37.346423    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:39.148042    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:39.148042    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:39.148042    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:41.286304    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:41.286377    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:41.290287    6816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 23:13:41.290361    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:41.301792    6816 ssh_runner.go:195] Run: cat /version.json
	I0307 23:13:41.301792    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:43.275293    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:43.275293    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:43.275293    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:43.276014    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:43.276014    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:43.276251    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:45.586568    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:45.596181    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:45.596546    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:13:45.614837    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:45.617399    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:45.617480    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:13:45.756146    6816 ssh_runner.go:235] Completed: cat /version.json: (4.4543118s)
	I0307 23:13:45.756146    6816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.4657425s)
	I0307 23:13:45.768143    6816 ssh_runner.go:195] Run: systemctl --version
	I0307 23:13:45.786269    6816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 23:13:45.794327    6816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 23:13:45.803784    6816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 23:13:45.827163    6816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 23:13:45.827163    6816 start.go:494] detecting cgroup driver to use...
	I0307 23:13:45.827472    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 23:13:45.864234    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 23:13:45.890427    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 23:13:45.907042    6816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 23:13:45.917584    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 23:13:45.943439    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 23:13:45.970338    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 23:13:45.999221    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 23:13:46.025444    6816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 23:13:46.053524    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 23:13:46.081965    6816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 23:13:46.107589    6816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 23:13:46.134300    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:13:46.290599    6816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 23:13:46.317756    6816 start.go:494] detecting cgroup driver to use...
	I0307 23:13:46.327291    6816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 23:13:46.359169    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 23:13:46.390019    6816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 23:13:46.419504    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 23:13:46.450319    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 23:13:46.479165    6816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 23:13:46.533953    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 23:13:46.552603    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 23:13:46.591865    6816 ssh_runner.go:195] Run: which cri-dockerd
	I0307 23:13:46.607007    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 23:13:46.622602    6816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 23:13:46.656531    6816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 23:13:46.832794    6816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 23:13:46.970730    6816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 23:13:46.971054    6816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 23:13:47.007830    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:13:47.183904    6816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 23:13:48.682406    6816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4984878s)
	I0307 23:13:48.692307    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 23:13:48.725390    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 23:13:48.756027    6816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 23:13:48.926521    6816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 23:13:49.096696    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:13:49.261907    6816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 23:13:49.297574    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 23:13:49.327032    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:13:49.492147    6816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 23:13:49.578133    6816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 23:13:49.591295    6816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 23:13:49.599325    6816 start.go:562] Will wait 60s for crictl version
	I0307 23:13:49.609641    6816 ssh_runner.go:195] Run: which crictl
	I0307 23:13:49.624537    6816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 23:13:49.685430    6816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
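The "Will wait 60s for socket path /var/run/cri-dockerd.sock" step is a simple poll-until-present with a deadline before crictl is queried. A Go sketch of such a wait loop; the 500 ms poll interval is an assumption, while the path and timeout come from the log.

// criwait.go - sketch of waiting for the cri-dockerd socket to appear.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/cri-dockerd.sock"
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(sock); err == nil {
			fmt.Println("socket ready:", sock)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
	os.Exit(1)
}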
	I0307 23:13:49.693286    6816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 23:13:49.734500    6816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 23:13:49.764422    6816 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0307 23:13:49.764422    6816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0307 23:13:49.768694    6816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0307 23:13:49.768694    6816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0307 23:13:49.768694    6816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0307 23:13:49.768694    6816 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0307 23:13:49.771439    6816 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0307 23:13:49.771439    6816 ip.go:210] interface addr: 172.20.48.1/20
	I0307 23:13:49.777645    6816 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0307 23:13:49.785152    6816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 23:13:49.812482    6816 kubeadm.go:877] updating cluster {Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4
ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 23:13:49.812482    6816 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 23:13:49.821469    6816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 23:13:49.842626    6816 docker.go:685] Got preloaded images: 
	I0307 23:13:49.842626    6816 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0307 23:13:49.853493    6816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 23:13:49.884585    6816 ssh_runner.go:195] Run: which lz4
	I0307 23:13:49.890145    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0307 23:13:49.899438    6816 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0307 23:13:49.905880    6816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 23:13:49.906006    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0307 23:13:52.205762    6816 docker.go:649] duration metric: took 2.315099s to copy over tarball
	I0307 23:13:52.215654    6816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 23:14:02.550024    6816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.3342156s)
	I0307 23:14:02.550078    6816 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0307 23:14:02.611691    6816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 23:14:02.628443    6816 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0307 23:14:02.665335    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:14:02.824847    6816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 23:14:05.048317    6816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.2233337s)
	I0307 23:14:05.056611    6816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 23:14:05.084587    6816 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
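The preload path above works by listing the image tags already in the Docker store; because registry.k8s.io/kube-apiserver:v1.28.4 was missing, the roughly 423 MB preload tarball was copied to /preloaded.tar.lz4 over SSH and extracted into /var. A Go sketch of that check-and-extract flow as run on the guest; the copy step itself is omitted.

// preload.go - sketch of "check preloaded images, extract tarball if missing".
package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), "registry.k8s.io/kube-apiserver:v1.28.4") {
		return // images already preloaded, nothing to extract
	}
	// Same tar invocation as logged: xattrs preserved, lz4-compressed, into /var.
	tar := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	tar.Stdout, tar.Stderr = os.Stdout, os.Stderr
	if err := tar.Run(); err != nil {
		panic(err)
	}
}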
	I0307 23:14:05.084587    6816 cache_images.go:84] Images are preloaded, skipping loading
	I0307 23:14:05.084587    6816 kubeadm.go:928] updating node { 172.20.58.169 8443 v1.28.4 docker true true} ...
	I0307 23:14:05.084587    6816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.58.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 23:14:05.095197    6816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 23:14:05.127094    6816 cni.go:84] Creating CNI manager for ""
	I0307 23:14:05.127148    6816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0307 23:14:05.127232    6816 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 23:14:05.127322    6816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.58.169 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-792400 NodeName:ha-792400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.58.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.58.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 23:14:05.127541    6816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.58.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-792400"
	  kubeletExtraArgs:
	    node-ip: 172.20.58.169
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.58.169"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 23:14:05.127541    6816 kube-vip.go:101] generating kube-vip config ...
	I0307 23:14:05.127541    6816 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.63.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0307 23:14:05.138269    6816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 23:14:05.152709    6816 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 23:14:05.163146    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0307 23:14:05.176907    6816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0307 23:14:05.201950    6816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 23:14:05.234890    6816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0307 23:14:05.261326    6816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1262 bytes)
	I0307 23:14:05.296249    6816 ssh_runner.go:195] Run: grep 172.20.63.254	control-plane.minikube.internal$ /etc/hosts
	I0307 23:14:05.299531    6816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 23:14:05.328459    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:14:05.480656    6816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 23:14:05.503011    6816 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400 for IP: 172.20.58.169
	I0307 23:14:05.503011    6816 certs.go:194] generating shared ca certs ...
	I0307 23:14:05.503112    6816 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:05.503303    6816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0307 23:14:05.504213    6816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0307 23:14:05.504554    6816 certs.go:256] generating profile certs ...
	I0307 23:14:05.505468    6816 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.key
	I0307 23:14:05.505727    6816 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.crt with IP's: []
	I0307 23:14:05.765614    6816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.crt ...
	I0307 23:14:05.765614    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.crt: {Name:mk2eea3648a63e5ca5595a6e8e677d21f3c19bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:05.772246    6816 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.key ...
	I0307 23:14:05.772246    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.key: {Name:mkb2a78624bba117cfb5b07a7e10b0d36117f24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:05.773120    6816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.933de409
	I0307 23:14:05.774137    6816 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.933de409 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.58.169 172.20.63.254]
	I0307 23:14:05.848810    6816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.933de409 ...
	I0307 23:14:05.848810    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.933de409: {Name:mk867b12391832dd101173d28ada253452002c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:05.856919    6816 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.933de409 ...
	I0307 23:14:05.856919    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.933de409: {Name:mk82310bcbd37aec0078deb26f85b7bb3c1ec537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:05.856919    6816 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.933de409 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt
	I0307 23:14:05.859219    6816 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.933de409 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key
	I0307 23:14:05.868200    6816 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key
	I0307 23:14:05.868200    6816 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt with IP's: []
	I0307 23:14:06.146600    6816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt ...
	I0307 23:14:06.146600    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt: {Name:mka66b41c9bd0c49bfa9652075c50a9e4f19325d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:06.150510    6816 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key ...
	I0307 23:14:06.150510    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key: {Name:mk5b5c5bc1a9b79de3e7b4b4d8fc04996f0e924f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
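The profile certificates generated above are ordinary x509 certs signed by the shared minikubeCA. Below is a minimal sketch of the apiserver serving-cert step, assuming an RSA CA key in PKCS#1 PEM form; it is illustrative only, not minikube's crypto.go implementation. The SAN IPs are the ones listed in the log line above.

// Sketch only: issue an apiserver serving certificate signed by an existing
// CA key pair, with the SANs seen in the log (service IP, loopback,
// cluster IP, node IP, HA virtual IP). Paths are illustrative.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	caKeyPEM, err := os.ReadFile("ca.key")
	if err != nil {
		panic(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	if err != nil {
		panic(err)
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.20.58.169"), net.ParseIP("172.20.63.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
}
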
	I0307 23:14:06.151831    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 23:14:06.152936    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0307 23:14:06.153140    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 23:14:06.153385    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 23:14:06.153540    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0307 23:14:06.153540    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0307 23:14:06.153540    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0307 23:14:06.156945    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0307 23:14:06.161827    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0307 23:14:06.162560    6816 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0307 23:14:06.162560    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0307 23:14:06.162713    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0307 23:14:06.162713    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0307 23:14:06.162713    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0307 23:14:06.163446    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0307 23:14:06.163446    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0307 23:14:06.164046    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:14:06.164184    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0307 23:14:06.164328    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 23:14:06.204918    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 23:14:06.247799    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 23:14:06.287712    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0307 23:14:06.325095    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0307 23:14:06.361866    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 23:14:06.401190    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 23:14:06.437684    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 23:14:06.474974    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0307 23:14:06.514702    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 23:14:06.554044    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0307 23:14:06.600945    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 23:14:06.639813    6816 ssh_runner.go:195] Run: openssl version
	I0307 23:14:06.656337    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0307 23:14:06.689513    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0307 23:14:06.696805    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0307 23:14:06.706132    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0307 23:14:06.722940    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 23:14:06.754742    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 23:14:06.783578    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:14:06.789740    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:14:06.800381    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:14:06.820037    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 23:14:06.845681    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0307 23:14:06.873141    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0307 23:14:06.879435    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0307 23:14:06.890463    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0307 23:14:06.910601    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
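The three cert-linking sequences above repeat one pattern: place the PEM under /usr/share/ca-certificates, ask openssl for its subject hash, then expose it to OpenSSL consumers as /etc/ssl/certs/<hash>.0. A small local sketch of that pattern follows; the paths are illustrative, and minikube drives the equivalent shell commands over SSH rather than calling them locally.

// Sketch of the hash-and-symlink step shown above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCert(path string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", path).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of: test -L <link> || ln -fs <path> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil
	}
	return os.Symlink(path, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
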
	I0307 23:14:06.937887    6816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 23:14:06.944603    6816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0307 23:14:06.944941    6816 kubeadm.go:391] StartCluster: {Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 23:14:06.953439    6816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 23:14:06.985616    6816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 23:14:07.012715    6816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 23:14:07.038206    6816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 23:14:07.053677    6816 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 23:14:07.053764    6816 kubeadm.go:156] found existing configuration files:
	
	I0307 23:14:07.065239    6816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0307 23:14:07.078895    6816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 23:14:07.091353    6816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 23:14:07.116598    6816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0307 23:14:07.132755    6816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 23:14:07.143958    6816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 23:14:07.169743    6816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0307 23:14:07.184354    6816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 23:14:07.194962    6816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 23:14:07.221942    6816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0307 23:14:07.239432    6816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 23:14:07.252044    6816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
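The four grep/rm pairs above implement a simple stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. A compact sketch of that check, with the file list and endpoint taken from the log:

// Sketch of the stale kubeconfig cleanup shown above.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // already targets the expected endpoint, keep it
		}
		// Missing file or wrong endpoint: delete so kubeadm rewrites it.
		if rmErr := os.Remove(f); rmErr == nil {
			fmt.Printf("removed stale config %s\n", f)
		}
	}
}
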
	I0307 23:14:07.267389    6816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 23:14:07.691062    6816 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 23:14:20.400610    6816 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0307 23:14:20.400842    6816 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 23:14:20.400936    6816 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 23:14:20.400936    6816 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 23:14:20.401548    6816 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 23:14:20.401812    6816 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 23:14:20.405108    6816 out.go:204]   - Generating certificates and keys ...
	I0307 23:14:20.405277    6816 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 23:14:20.405277    6816 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 23:14:20.405277    6816 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 23:14:20.405277    6816 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0307 23:14:20.405939    6816 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0307 23:14:20.406140    6816 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0307 23:14:20.406314    6816 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0307 23:14:20.406352    6816 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-792400 localhost] and IPs [172.20.58.169 127.0.0.1 ::1]
	I0307 23:14:20.406352    6816 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0307 23:14:20.406883    6816 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-792400 localhost] and IPs [172.20.58.169 127.0.0.1 ::1]
	I0307 23:14:20.407173    6816 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 23:14:20.407322    6816 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 23:14:20.407322    6816 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0307 23:14:20.407322    6816 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 23:14:20.407322    6816 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 23:14:20.407853    6816 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 23:14:20.408004    6816 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 23:14:20.408165    6816 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 23:14:20.408391    6816 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 23:14:20.408391    6816 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 23:14:20.414321    6816 out.go:204]   - Booting up control plane ...
	I0307 23:14:20.414651    6816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 23:14:20.414907    6816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 23:14:20.414907    6816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 23:14:20.414907    6816 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 23:14:20.415533    6816 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 23:14:20.415639    6816 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 23:14:20.415639    6816 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 23:14:20.415639    6816 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.585889 seconds
	I0307 23:14:20.416439    6816 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 23:14:20.416439    6816 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 23:14:20.416439    6816 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 23:14:20.417228    6816 kubeadm.go:309] [mark-control-plane] Marking the node ha-792400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 23:14:20.417228    6816 kubeadm.go:309] [bootstrap-token] Using token: dqdu0z.9ukmcum3jye837js
	I0307 23:14:20.419595    6816 out.go:204]   - Configuring RBAC rules ...
	I0307 23:14:20.421713    6816 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 23:14:20.421980    6816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 23:14:20.422272    6816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 23:14:20.422662    6816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 23:14:20.422942    6816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 23:14:20.422942    6816 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 23:14:20.422942    6816 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 23:14:20.422942    6816 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 23:14:20.422942    6816 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 23:14:20.422942    6816 kubeadm.go:309] 
	I0307 23:14:20.422942    6816 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 23:14:20.422942    6816 kubeadm.go:309] 
	I0307 23:14:20.424056    6816 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 23:14:20.424126    6816 kubeadm.go:309] 
	I0307 23:14:20.424171    6816 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 23:14:20.424271    6816 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 23:14:20.424537    6816 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 23:14:20.424537    6816 kubeadm.go:309] 
	I0307 23:14:20.424640    6816 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 23:14:20.424698    6816 kubeadm.go:309] 
	I0307 23:14:20.424698    6816 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 23:14:20.424698    6816 kubeadm.go:309] 
	I0307 23:14:20.424698    6816 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 23:14:20.424698    6816 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 23:14:20.425398    6816 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 23:14:20.425398    6816 kubeadm.go:309] 
	I0307 23:14:20.425570    6816 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 23:14:20.425730    6816 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 23:14:20.425730    6816 kubeadm.go:309] 
	I0307 23:14:20.425954    6816 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token dqdu0z.9ukmcum3jye837js \
	I0307 23:14:20.426178    6816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 \
	I0307 23:14:20.426178    6816 kubeadm.go:309] 	--control-plane 
	I0307 23:14:20.426400    6816 kubeadm.go:309] 
	I0307 23:14:20.426462    6816 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 23:14:20.426462    6816 kubeadm.go:309] 
	I0307 23:14:20.426462    6816 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token dqdu0z.9ukmcum3jye837js \
	I0307 23:14:20.426462    6816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
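The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo, which a joining node can recompute to pin the control plane it is about to trust. A sketch of that computation; the ca.crt path is the conventional kubeadm location and is assumed here:

// Sketch: recompute kubeadm's discovery-token-ca-cert-hash from the CA cert.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the raw DER SubjectPublicKeyInfo, as kubeadm does.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
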
	I0307 23:14:20.427005    6816 cni.go:84] Creating CNI manager for ""
	I0307 23:14:20.427005    6816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0307 23:14:20.427658    6816 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0307 23:14:20.434145    6816 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 23:14:20.449195    6816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0307 23:14:20.449254    6816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0307 23:14:20.518865    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 23:14:21.730622    6816 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.2117461s)
	I0307 23:14:21.730622    6816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 23:14:21.752041    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792400 minikube.k8s.io/updated_at=2024_03_07T23_14_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=ha-792400 minikube.k8s.io/primary=true
	I0307 23:14:21.752041    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:21.773281    6816 ops.go:34] apiserver oom_adj: -16
	I0307 23:14:21.931549    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:22.442053    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:22.944573    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:23.447513    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:23.942422    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:24.431993    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:24.932618    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:25.435650    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:25.946609    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:26.441602    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:26.930610    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:27.440213    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:27.934655    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:28.440075    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:28.945351    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:29.432042    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:29.948055    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:30.433600    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:30.941225    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:31.434516    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:31.939893    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:32.437839    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:32.944464    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:33.446697    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:33.655610    6816 kubeadm.go:1106] duration metric: took 11.9248398s to wait for elevateKubeSystemPrivileges
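The repeated `kubectl get sa default` runs above are a polling loop: minikube retries roughly every 500ms until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges duration metric measures. A sketch of that wait pattern follows; the kubectl path mirrors the log, while the interval and timeout are assumptions.

// Sketch: poll until the default ServiceAccount is present or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		panic(err)
	}
}
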
	W0307 23:14:33.655706    6816 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 23:14:33.655706    6816 kubeadm.go:393] duration metric: took 26.7105127s to StartCluster
	I0307 23:14:33.655706    6816 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:33.655706    6816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:14:33.657520    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:33.659437    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 23:14:33.659510    6816 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 23:14:33.659673    6816 addons.go:69] Setting storage-provisioner=true in profile "ha-792400"
	I0307 23:14:33.659437    6816 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.58.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:14:33.659872    6816 start.go:240] waiting for startup goroutines ...
	I0307 23:14:33.659787    6816 addons.go:234] Setting addon storage-provisioner=true in "ha-792400"
	I0307 23:14:33.659787    6816 addons.go:69] Setting default-storageclass=true in profile "ha-792400"
	I0307 23:14:33.659904    6816 host.go:66] Checking if "ha-792400" exists ...
	I0307 23:14:33.659904    6816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-792400"
	I0307 23:14:33.659904    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:14:33.660599    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:14:33.661389    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:14:33.867642    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0307 23:14:34.425892    6816 start.go:948] {"host.minikube.internal": 172.20.48.1} host record injected into CoreDNS's ConfigMap
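The sed pipeline above rewrites the CoreDNS Corefile in place: it adds a log directive ahead of the existing errors line and, ahead of the forward . /etc/resolv.conf directive, a hosts block that maps host.minikube.internal to the Hyper-V host. The injected block, as spelled out in the command, is roughly:

   hosts {
      172.20.48.1 host.minikube.internal
      fallthrough
   }
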
	I0307 23:14:35.740278    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:14:35.740278    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:35.746244    6816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:14:35.747445    6816 kapi.go:59] client config for ha-792400: &rest.Config{Host:"https://172.20.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-792400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-792400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0307 23:14:35.748858    6816 cert_rotation.go:137] Starting client certificate rotation controller
	I0307 23:14:35.748858    6816 addons.go:234] Setting addon default-storageclass=true in "ha-792400"
	I0307 23:14:35.748858    6816 host.go:66] Checking if "ha-792400" exists ...
	I0307 23:14:35.750089    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:14:35.758592    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:14:35.758592    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:35.763580    6816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 23:14:35.766442    6816 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 23:14:35.766524    6816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 23:14:35.766593    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:14:37.894961    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:14:37.898775    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:37.899015    6816 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 23:14:37.899054    6816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 23:14:37.899091    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:14:37.958866    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:14:37.958866    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:37.970527    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:14:40.021257    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:14:40.021257    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:40.021257    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:14:40.569173    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:14:40.571244    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:40.571748    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:14:40.729880    6816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 23:14:42.386869    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:14:42.395670    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:42.396080    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:14:42.518775    6816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 23:14:42.765388    6816 round_trippers.go:463] GET https://172.20.63.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0307 23:14:42.765388    6816 round_trippers.go:469] Request Headers:
	I0307 23:14:42.765982    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:14:42.766034    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:14:42.778156    6816 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0307 23:14:42.781037    6816 round_trippers.go:463] PUT https://172.20.63.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0307 23:14:42.781107    6816 round_trippers.go:469] Request Headers:
	I0307 23:14:42.781107    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:14:42.781179    6816 round_trippers.go:473]     Content-Type: application/json
	I0307 23:14:42.781179    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:14:42.784394    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:14:42.793319    6816 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0307 23:14:42.795356    6816 addons.go:505] duration metric: took 9.1358319s for enable addons: enabled=[storage-provisioner default-storageclass]
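The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above corresponds to marking the standard StorageClass as the cluster default. The same effect can be sketched with a kubectl patch that sets the well-known default-class annotation; kubectl being on PATH and the kubeconfig location are assumptions here.

// Sketch: mark the "standard" StorageClass as the default via annotation.
package main

import (
	"os"
	"os/exec"
)

func main() {
	patch := `{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`
	cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
		"patch", "storageclass", "standard", "-p", patch)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
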
	I0307 23:14:42.795875    6816 start.go:245] waiting for cluster config update ...
	I0307 23:14:42.795875    6816 start.go:254] writing updated cluster config ...
	I0307 23:14:42.802204    6816 out.go:177] 
	I0307 23:14:42.807394    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:14:42.807394    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:14:42.810675    6816 out.go:177] * Starting "ha-792400-m02" control-plane node in "ha-792400" cluster
	I0307 23:14:42.817484    6816 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 23:14:42.817484    6816 cache.go:56] Caching tarball of preloaded images
	I0307 23:14:42.818604    6816 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 23:14:42.818668    6816 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 23:14:42.818668    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:14:42.821630    6816 start.go:360] acquireMachinesLock for ha-792400-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 23:14:42.821630    6816 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-792400-m02"
	I0307 23:14:42.821630    6816 start.go:93] Provisioning new machine with config: &{Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:14:42.822280    6816 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0307 23:14:42.825022    6816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 23:14:42.825721    6816 start.go:159] libmachine.API.Create for "ha-792400" (driver="hyperv")
	I0307 23:14:42.825778    6816 client.go:168] LocalClient.Create starting
	I0307 23:14:42.825778    6816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0307 23:14:42.826307    6816 main.go:141] libmachine: Decoding PEM data...
	I0307 23:14:42.826307    6816 main.go:141] libmachine: Parsing certificate...
	I0307 23:14:42.826586    6816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0307 23:14:42.826894    6816 main.go:141] libmachine: Decoding PEM data...
	I0307 23:14:42.826894    6816 main.go:141] libmachine: Parsing certificate...
	I0307 23:14:42.827018    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0307 23:14:44.516776    6816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0307 23:14:44.516776    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:44.516776    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0307 23:14:46.165702    6816 main.go:141] libmachine: [stdout =====>] : False
	
	I0307 23:14:46.165702    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:46.170920    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0307 23:14:47.544367    6816 main.go:141] libmachine: [stdout =====>] : True
	
	I0307 23:14:47.544367    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:47.552645    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0307 23:14:50.698879    6816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0307 23:14:50.698879    6816 main.go:141] libmachine: [stderr =====>] : 
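The switch discovery above shells out to PowerShell with ConvertTo-Json and then decodes the result. A minimal sketch of decoding that JSON, with the payload copied verbatim from the output above as a literal:

// Sketch: decode the Hyper-V switch list emitted by ConvertTo-Json.
package main

import (
	"encoding/json"
	"fmt"
)

type vmSwitch struct {
	Id         string `json:"Id"`
	Name       string `json:"Name"`
	SwitchType int    `json:"SwitchType"`
}

func main() {
	out := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("switch %q (type %d)\n", s.Name, s.SwitchType)
	}
}
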
	I0307 23:14:50.701118    6816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0307 23:14:51.196754    6816 main.go:141] libmachine: Creating SSH key...
	I0307 23:14:51.340808    6816 main.go:141] libmachine: Creating VM...
	I0307 23:14:51.340808    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0307 23:14:53.917171    6816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0307 23:14:53.917171    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:53.927415    6816 main.go:141] libmachine: Using switch "Default Switch"
	I0307 23:14:53.927531    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0307 23:14:55.467393    6816 main.go:141] libmachine: [stdout =====>] : True
	
	I0307 23:14:55.474251    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:55.474251    6816 main.go:141] libmachine: Creating VHD
	I0307 23:14:55.474251    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0307 23:14:58.864246    6816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 73042B3D-DA9F-4F61-85B6-78EDA780FF77
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0307 23:14:58.864384    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:58.864438    6816 main.go:141] libmachine: Writing magic tar header
	I0307 23:14:58.864503    6816 main.go:141] libmachine: Writing SSH key tar header
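The "magic tar header" and "SSH key tar header" steps write a small tar stream at the start of the freshly created fixed VHD so the boot2docker guest can pick up the generated key on first boot. A sketch of that trick under those assumptions; the file names and vhd path here are illustrative, not the driver's actual layout.

// Sketch: embed an SSH public key as a tar stream at the start of a raw disk image.
package main

import (
	"archive/tar"
	"bytes"
	"os"
)

func main() {
	pub, err := os.ReadFile("id_rsa.pub")
	if err != nil {
		panic(err)
	}
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pub))}
	if err := tw.WriteHeader(hdr); err != nil {
		panic(err)
	}
	if _, err := tw.Write(pub); err != nil {
		panic(err)
	}
	if err := tw.Close(); err != nil {
		panic(err)
	}
	// Overwrite the beginning of the raw disk image with the tar stream.
	f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if _, err := f.WriteAt(buf.Bytes(), 0); err != nil {
		panic(err)
	}
}
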
	I0307 23:14:58.873715    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0307 23:15:01.812254    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:01.823422    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:01.823422    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\disk.vhd' -SizeBytes 20000MB
	I0307 23:15:04.160569    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:04.170334    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:04.170334    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-792400-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0307 23:15:07.381123    6816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-792400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0307 23:15:07.392760    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:07.393051    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-792400-m02 -DynamicMemoryEnabled $false
	I0307 23:15:09.345142    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:09.354755    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:09.354953    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-792400-m02 -Count 2
	I0307 23:15:11.252413    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:11.263090    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:11.263090    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-792400-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\boot2docker.iso'
	I0307 23:15:13.531824    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:13.543517    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:13.543628    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-792400-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\disk.vhd'
	I0307 23:15:15.876853    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:15.876853    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:15.876853    6816 main.go:141] libmachine: Starting VM...
	I0307 23:15:15.877088    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-792400-m02
	I0307 23:15:18.644663    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:18.654809    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:18.654809    6816 main.go:141] libmachine: Waiting for host to start...
	I0307 23:15:18.654873    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:20.721909    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:20.722463    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:20.722463    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:22.966802    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:22.966802    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:23.982521    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:25.980713    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:25.980713    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:25.984996    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:28.271559    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:28.271559    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:29.283950    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:31.225122    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:31.225122    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:31.225122    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:33.571679    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:33.572186    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:34.574478    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:36.575528    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:36.575528    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:36.575528    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:38.884082    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:38.884082    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:39.885882    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:42.004607    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:42.004708    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:42.004766    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:44.371640    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:15:44.371640    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:44.371742    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:46.347939    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:46.347939    6816 main.go:141] libmachine: [stderr =====>] : 
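The driver above blocks in a poll loop, shelling out to PowerShell once for the VM state and once for the first adapter address until Hyper-V reports an IPv4 lease. A rough Go sketch of that poll-until-ready pattern (hypothetical runPS helper and inlined VM name, not minikube's actual hyperv driver code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // runPS executes one PowerShell expression and returns its trimmed stdout.
    func runPS(expr string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	vm := "ha-792400-m02"
    	for {
    		state, err := runPS(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
    		if err != nil || state != "Running" {
    			time.Sleep(time.Second)
    			continue
    		}
    		ip, err := runPS(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
    		if err == nil && ip != "" {
    			fmt.Println("VM is up at", ip)
    			return
    		}
    		time.Sleep(time.Second)
    	}
    }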
	I0307 23:15:46.347939    6816 machine.go:94] provisionDockerMachine start ...
	I0307 23:15:46.349005    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:48.319286    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:48.320206    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:48.320206    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:50.703007    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:15:50.703314    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:50.708067    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:15:50.708860    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:15:50.708860    6816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 23:15:50.844556    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 23:15:50.844556    6816 buildroot.go:166] provisioning hostname "ha-792400-m02"
	I0307 23:15:50.844556    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:52.799066    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:52.799392    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:52.799508    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:55.179428    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:15:55.179617    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:55.185033    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:15:55.185169    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:15:55.185169    6816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792400-m02 && echo "ha-792400-m02" | sudo tee /etc/hostname
	I0307 23:15:55.345388    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792400-m02
	
	I0307 23:15:55.345388    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:57.321881    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:57.321881    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:57.322386    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:59.714125    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:15:59.715122    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:59.720428    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:15:59.721066    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:15:59.721066    6816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 23:15:59.866659    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 23:15:59.866659    6816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0307 23:15:59.866659    6816 buildroot.go:174] setting up certificates
	I0307 23:15:59.866659    6816 provision.go:84] configureAuth start
	I0307 23:15:59.866659    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:01.829342    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:01.829342    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:01.829342    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:04.223265    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:04.223265    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:04.223412    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:06.195086    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:06.195086    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:06.195844    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:08.542880    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:08.542880    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:08.542880    6816 provision.go:143] copyHostCerts
	I0307 23:16:08.543995    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0307 23:16:08.544211    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0307 23:16:08.544281    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0307 23:16:08.544617    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0307 23:16:08.545699    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0307 23:16:08.546142    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0307 23:16:08.546142    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0307 23:16:08.546499    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0307 23:16:08.547411    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0307 23:16:08.547701    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0307 23:16:08.547806    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0307 23:16:08.548003    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0307 23:16:08.549065    6816 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-792400-m02 san=[127.0.0.1 172.20.50.199 ha-792400-m02 localhost minikube]
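With the host-side CA material in place, provision.go mints a per-machine Docker server certificate whose SANs cover 127.0.0.1, the guest IP and the machine's names, as listed above. A minimal, self-contained Go sketch of issuing a SAN-bearing server certificate (self-signed here for brevity, whereas minikube signs it with its ca.pem/ca-key.pem):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Key for the server certificate.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-792400-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs taken from the log line above.
    		DNSNames:    []string{"ha-792400-m02", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.50.199")},
    	}
    	// Self-signed: template doubles as parent. minikube passes its CA cert/key here instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }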
	I0307 23:16:08.608186    6816 provision.go:177] copyRemoteCerts
	I0307 23:16:08.622165    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 23:16:08.623159    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:10.580487    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:10.581554    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:10.581649    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:12.891768    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:12.892519    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:12.892919    6816 sshutil.go:53] new ssh client: &{IP:172.20.50.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
	I0307 23:16:12.993551    6816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3703511s)
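Every ssh_runner step above is a single command executed over SSH as the docker user with the per-machine id_rsa key shown in the ssh client line. Assuming a plain golang.org/x/crypto/ssh client (a sketch only, not minikube's ssh_runner), one such remote command could be issued like this:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa`)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
    	}
    	client, err := ssh.Dial("tcp", "172.20.50.199:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput("sudo mkdir -p /etc/docker")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out))
    }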
	I0307 23:16:12.993551    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0307 23:16:12.993551    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 23:16:13.036249    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0307 23:16:13.036653    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0307 23:16:13.080724    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0307 23:16:13.081128    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 23:16:13.127500    6816 provision.go:87] duration metric: took 13.2607159s to configureAuth
	I0307 23:16:13.127580    6816 buildroot.go:189] setting minikube options for container-runtime
	I0307 23:16:13.127715    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:16:13.127715    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:15.102405    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:15.102405    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:15.102405    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:17.470130    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:17.470202    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:17.474962    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:16:17.474962    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:16:17.474962    6816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 23:16:17.615256    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 23:16:17.615256    6816 buildroot.go:70] root file system type: tmpfs
	I0307 23:16:17.615256    6816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 23:16:17.615256    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:19.538404    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:19.538404    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:19.539285    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:21.844653    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:21.845037    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:21.850386    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:16:21.850548    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:16:21.850548    6816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.58.169"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 23:16:22.016951    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.58.169
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 23:16:22.016951    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:24.033359    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:24.034400    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:24.034612    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:26.408882    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:26.408948    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:26.413873    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:16:26.414389    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:16:26.414454    6816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 23:16:27.528826    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 23:16:27.528826    6816 machine.go:97] duration metric: took 41.1804995s to provisionDockerMachine
	I0307 23:16:27.528826    6816 client.go:171] duration metric: took 1m44.7020574s to LocalClient.Create
	I0307 23:16:27.528826    6816 start.go:167] duration metric: took 1m44.702114s to libmachine.API.Create "ha-792400"
	I0307 23:16:27.528826    6816 start.go:293] postStartSetup for "ha-792400-m02" (driver="hyperv")
	I0307 23:16:27.528826    6816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 23:16:27.544381    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 23:16:27.544381    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:29.593248    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:29.593317    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:29.593372    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:31.941723    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:31.941723    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:31.942269    6816 sshutil.go:53] new ssh client: &{IP:172.20.50.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
	I0307 23:16:32.053131    6816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5087073s)
	I0307 23:16:32.065207    6816 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 23:16:32.071322    6816 info.go:137] Remote host: Buildroot 2023.02.9
	I0307 23:16:32.071322    6816 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0307 23:16:32.071852    6816 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0307 23:16:32.073063    6816 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0307 23:16:32.073129    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0307 23:16:32.084755    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 23:16:32.102124    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0307 23:16:32.142944    6816 start.go:296] duration metric: took 4.6140738s for postStartSetup
	I0307 23:16:32.146235    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:34.136973    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:34.136973    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:34.137054    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:36.511801    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:36.511854    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:36.511854    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:16:36.514196    6816 start.go:128] duration metric: took 1m53.6908405s to createHost
	I0307 23:16:36.514300    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:38.453391    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:38.453391    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:38.453995    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:40.751969    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:40.751969    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:40.757047    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:16:40.757047    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:16:40.757638    6816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0307 23:16:40.892074    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709853400.904284684
	
	I0307 23:16:40.892167    6816 fix.go:216] guest clock: 1709853400.904284684
	I0307 23:16:40.892167    6816 fix.go:229] Guest: 2024-03-07 23:16:40.904284684 +0000 UTC Remote: 2024-03-07 23:16:36.5143005 +0000 UTC m=+298.224101001 (delta=4.389984184s)
	I0307 23:16:40.892245    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:42.857446    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:42.857540    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:42.857609    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:45.215799    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:45.216183    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:45.221016    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:16:45.222059    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:16:45.222059    6816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709853400
	I0307 23:16:45.367118    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar  7 23:16:40 UTC 2024
	
	I0307 23:16:45.367118    6816 fix.go:236] clock set: Thu Mar  7 23:16:40 UTC 2024
	 (err=<nil>)
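The clock fix above reads the guest's epoch time, compares it against the host's, and resets the guest clock when the drift (about 4.39s in this run) would matter for TLS validity and etcd. A rough local sketch of that compare-and-set decision, with a hypothetical runRemote stand-in that simply replays the guest reading from this log so the example runs offline:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // runRemote stands in for the SSH command runner; here it replays the guest's
    // `date +%s.%N` output captured in the log above (hypothetical helper).
    func runRemote(cmd string) (string, error) {
    	return "1709853400.904284684", nil
    }

    func main() {
    	out, err := runRemote("date +%s.%N")
    	if err != nil {
    		panic(err)
    	}
    	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(int64(secs), 0)
    	delta := guest.Sub(time.Now())
    	if delta < 0 {
    		delta = -delta
    	}
    	fmt.Printf("guest clock drift: %s\n", delta)
    	if delta > 2*time.Second {
    		// In the real flow this command goes back over SSH.
    		fmt.Printf("would run: sudo date -s @%d\n", int64(secs))
    	}
    }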
	I0307 23:16:45.367118    6816 start.go:83] releasing machines lock for "ha-792400-m02", held for 2m2.5443291s
	I0307 23:16:45.367414    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:47.295042    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:47.295042    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:47.295042    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:49.636476    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:49.637403    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:49.641195    6816 out.go:177] * Found network options:
	I0307 23:16:49.644200    6816 out.go:177]   - NO_PROXY=172.20.58.169
	W0307 23:16:49.646494    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 23:16:49.648507    6816 out.go:177]   - NO_PROXY=172.20.58.169
	W0307 23:16:49.651557    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 23:16:49.652760    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 23:16:49.654067    6816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 23:16:49.655104    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:49.664170    6816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 23:16:49.664170    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:51.707627    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:51.707627    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:51.707627    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:51.714025    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:51.714025    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:51.714025    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:54.128858    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:54.128858    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:54.129170    6816 sshutil.go:53] new ssh client: &{IP:172.20.50.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
	I0307 23:16:54.171510    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:54.171576    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:54.171914    6816 sshutil.go:53] new ssh client: &{IP:172.20.50.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
	I0307 23:16:54.233181    6816 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.568865s)
	W0307 23:16:54.233181    6816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 23:16:54.244235    6816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 23:16:54.348326    6816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6931783s)
	I0307 23:16:54.348326    6816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 23:16:54.348326    6816 start.go:494] detecting cgroup driver to use...
	I0307 23:16:54.348326    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 23:16:54.393187    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 23:16:54.421408    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 23:16:54.438580    6816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 23:16:54.447497    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 23:16:54.477460    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 23:16:54.504946    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 23:16:54.533535    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 23:16:54.562610    6816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 23:16:54.591393    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 23:16:54.623689    6816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 23:16:54.653057    6816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 23:16:54.682269    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:16:54.858072    6816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 23:16:54.889517    6816 start.go:494] detecting cgroup driver to use...
	I0307 23:16:54.901711    6816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 23:16:54.937325    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 23:16:54.967607    6816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 23:16:55.007178    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 23:16:55.040057    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 23:16:55.075379    6816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 23:16:55.136539    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 23:16:55.156124    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 23:16:55.198294    6816 ssh_runner.go:195] Run: which cri-dockerd
	I0307 23:16:55.215697    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 23:16:55.231582    6816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 23:16:55.274264    6816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 23:16:55.453497    6816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 23:16:55.633386    6816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 23:16:55.633557    6816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 23:16:55.674647    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:16:55.866144    6816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 23:16:57.383151    6816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5169921s)
	I0307 23:16:57.397670    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 23:16:57.431508    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 23:16:57.464067    6816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 23:16:57.659537    6816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 23:16:57.843349    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:16:58.034056    6816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 23:16:58.072592    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 23:16:58.104418    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:16:58.285231    6816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 23:16:58.376721    6816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 23:16:58.387574    6816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 23:16:58.395907    6816 start.go:562] Will wait 60s for crictl version
	I0307 23:16:58.407297    6816 ssh_runner.go:195] Run: which crictl
	I0307 23:16:58.423671    6816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 23:16:58.488539    6816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0307 23:16:58.498215    6816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 23:16:58.537695    6816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 23:16:58.571959    6816 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0307 23:16:58.574754    6816 out.go:177]   - env NO_PROXY=172.20.58.169
	I0307 23:16:58.577371    6816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0307 23:16:58.580157    6816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0307 23:16:58.581199    6816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0307 23:16:58.581199    6816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0307 23:16:58.581199    6816 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0307 23:16:58.583543    6816 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0307 23:16:58.583543    6816 ip.go:210] interface addr: 172.20.48.1/20
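Here ip.go scans the host's adapters for the one whose name starts with "vEthernet (Default Switch)" and takes its IPv4 address (172.20.48.1/20) as the host.minikube.internal target written into the guest's /etc/hosts below. A generic standard-library Go sketch of that interface lookup:

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    func main() {
    	const prefix = "vEthernet (Default Switch)"
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		panic(err)
    	}
    	for _, iface := range ifaces {
    		if !strings.HasPrefix(iface.Name, prefix) {
    			continue
    		}
    		addrs, err := iface.Addrs()
    		if err != nil {
    			panic(err)
    		}
    		for _, addr := range addrs {
    			// Keep only IPv4 addresses; link-local IPv6 entries are skipped.
    			if ipnet, ok := addr.(*net.IPNet); ok && ipnet.IP.To4() != nil {
    				fmt.Printf("host-reachable address on %q: %s\n", iface.Name, ipnet.IP)
    			}
    		}
    		return
    	}
    	fmt.Println("no interface matched prefix", prefix)
    }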
	I0307 23:16:58.592642    6816 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0307 23:16:58.598873    6816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 23:16:58.617532    6816 mustload.go:65] Loading cluster: ha-792400
	I0307 23:16:58.618148    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:16:58.618994    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:17:00.594607    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:17:00.595152    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:17:00.595152    6816 host.go:66] Checking if "ha-792400" exists ...
	I0307 23:17:00.595895    6816 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400 for IP: 172.20.50.199
	I0307 23:17:00.595895    6816 certs.go:194] generating shared ca certs ...
	I0307 23:17:00.595895    6816 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:17:00.596522    6816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0307 23:17:00.596856    6816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0307 23:17:00.596986    6816 certs.go:256] generating profile certs ...
	I0307 23:17:00.597873    6816 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.key
	I0307 23:17:00.598046    6816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6977efa7
	I0307 23:17:00.598126    6816 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6977efa7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.58.169 172.20.50.199 172.20.63.254]
	I0307 23:17:00.709500    6816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6977efa7 ...
	I0307 23:17:00.709500    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6977efa7: {Name:mk4dc464a636a1c1fc40a8d49a1c49b8951b5d17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:17:00.711557    6816 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6977efa7 ...
	I0307 23:17:00.711557    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6977efa7: {Name:mk38eabc37a82b7f04a1b43f06a56e71bc33b402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:17:00.711877    6816 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6977efa7 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt
	I0307 23:17:00.724637    6816 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6977efa7 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key
	I0307 23:17:00.725671    6816 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key
	I0307 23:17:00.725671    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 23:17:00.725671    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0307 23:17:00.725671    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 23:17:00.725671    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 23:17:00.725671    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0307 23:17:00.726636    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0307 23:17:00.726636    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0307 23:17:00.726636    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0307 23:17:00.726636    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0307 23:17:00.726636    6816 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0307 23:17:00.726636    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0307 23:17:00.727642    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0307 23:17:00.727642    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0307 23:17:00.727642    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0307 23:17:00.727642    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0307 23:17:00.728765    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0307 23:17:00.729026    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:17:00.729271    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0307 23:17:00.729415    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:17:02.710509    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:17:02.710578    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:17:02.710711    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:17:05.051003    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:17:05.051003    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:17:05.051229    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:17:05.141664    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0307 23:17:05.149827    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0307 23:17:05.178926    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0307 23:17:05.185073    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0307 23:17:05.215054    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0307 23:17:05.222381    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0307 23:17:05.252861    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0307 23:17:05.258764    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0307 23:17:05.285866    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0307 23:17:05.292316    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0307 23:17:05.323712    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0307 23:17:05.329555    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0307 23:17:05.352747    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 23:17:05.397189    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 23:17:05.438417    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 23:17:05.484471    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0307 23:17:05.525379    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0307 23:17:05.569157    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 23:17:05.608748    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 23:17:05.649512    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 23:17:05.690556    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0307 23:17:05.729847    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 23:17:05.770325    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0307 23:17:05.809783    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0307 23:17:05.838711    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0307 23:17:05.867013    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0307 23:17:05.896975    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0307 23:17:05.926118    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0307 23:17:05.954267    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0307 23:17:05.983720    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0307 23:17:06.024558    6816 ssh_runner.go:195] Run: openssl version
	I0307 23:17:06.048151    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0307 23:17:06.075656    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0307 23:17:06.081850    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0307 23:17:06.092642    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0307 23:17:06.110914    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 23:17:06.138687    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 23:17:06.167218    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:17:06.173674    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:17:06.182994    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:17:06.202128    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 23:17:06.233724    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0307 23:17:06.261771    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0307 23:17:06.267932    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0307 23:17:06.277449    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0307 23:17:06.297246    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0307 23:17:06.325969    6816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 23:17:06.331875    6816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0307 23:17:06.331875    6816 kubeadm.go:928] updating node {m02 172.20.50.199 8443 v1.28.4 docker true true} ...
	I0307 23:17:06.331875    6816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.50.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 23:17:06.332416    6816 kube-vip.go:101] generating kube-vip config ...
	I0307 23:17:06.332416    6816 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.63.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0307 23:17:06.342787    6816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 23:17:06.357831    6816 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0307 23:17:06.369808    6816 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0307 23:17:06.387661    6816 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0307 23:17:06.387820    6816 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
	I0307 23:17:06.387820    6816 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
	I0307 23:17:07.501992    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0307 23:17:07.511846    6816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0307 23:17:07.519798    6816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0307 23:17:07.519798    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0307 23:17:11.485396    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0307 23:17:11.495542    6816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0307 23:17:11.503127    6816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0307 23:17:11.503247    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0307 23:17:14.882122    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 23:17:14.905074    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0307 23:17:14.916375    6816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0307 23:17:14.923364    6816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0307 23:17:14.923504    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0307 23:17:15.718085    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0307 23:17:15.734070    6816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0307 23:17:15.763051    6816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 23:17:15.791514    6816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1262 bytes)
	I0307 23:17:15.831045    6816 ssh_runner.go:195] Run: grep 172.20.63.254	control-plane.minikube.internal$ /etc/hosts
	I0307 23:17:15.836141    6816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 23:17:15.866883    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:17:16.065483    6816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 23:17:16.090409    6816 host.go:66] Checking if "ha-792400" exists ...
	I0307 23:17:16.091639    6816 start.go:316] joinCluster: &{Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.50.199 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 23:17:16.091757    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0307 23:17:16.091757    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:17:18.070104    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:17:18.070468    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:17:18.070468    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:17:20.366651    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:17:20.366651    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:17:20.367193    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:17:20.753158    6816 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.6613569s)
	I0307 23:17:20.753158    6816 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.20.50.199 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:17:20.753158    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0nf1yh.8o5o4jhgw43h1vbc --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-792400-m02 --control-plane --apiserver-advertise-address=172.20.50.199 --apiserver-bind-port=8443"
	I0307 23:18:17.416732    6816 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0nf1yh.8o5o4jhgw43h1vbc --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-792400-m02 --control-plane --apiserver-advertise-address=172.20.50.199 --apiserver-bind-port=8443": (56.6630422s)
	I0307 23:18:17.416732    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0307 23:18:18.072127    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792400-m02 minikube.k8s.io/updated_at=2024_03_07T23_18_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=ha-792400 minikube.k8s.io/primary=false
	I0307 23:18:18.244060    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-792400-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0307 23:18:18.403934    6816 start.go:318] duration metric: took 1m2.3118557s to joinCluster
	I0307 23:18:18.404299    6816 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.20.50.199 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:18:18.407534    6816 out.go:177] * Verifying Kubernetes components...
	I0307 23:18:18.405170    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:18:18.424825    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:18:18.759893    6816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 23:18:18.791415    6816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:18:18.792370    6816 kapi.go:59] client config for ha-792400: &rest.Config{Host:"https://172.20.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-792400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-792400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0307 23:18:18.792520    6816 kubeadm.go:477] Overriding stale ClientConfig host https://172.20.63.254:8443 with https://172.20.58.169:8443
	I0307 23:18:18.793079    6816 node_ready.go:35] waiting up to 6m0s for node "ha-792400-m02" to be "Ready" ...
	I0307 23:18:18.793079    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:18.793079    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:18.793079    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:18.793079    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:18.812144    6816 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0307 23:18:19.300448    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:19.300448    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:19.300448    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:19.300448    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:19.308543    6816 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0307 23:18:19.805349    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:19.805349    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:19.805683    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:19.805683    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:19.809974    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:20.299324    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:20.299324    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:20.299324    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:20.299324    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:20.305477    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:18:20.809465    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:20.810462    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:20.810462    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:20.810462    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:20.839749    6816 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0307 23:18:20.840296    6816 node_ready.go:53] node "ha-792400-m02" has status "Ready":"False"
	I0307 23:18:21.299083    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:21.299158    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:21.299158    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:21.299201    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:21.306197    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:18:21.806807    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:21.806885    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:21.806885    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:21.806933    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:21.811286    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:22.298247    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:22.298247    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:22.298247    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:22.298247    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:22.302638    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:22.805523    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:22.805751    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:22.805751    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:22.805751    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:22.811721    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:23.298381    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:23.298611    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:23.298611    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:23.298611    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:23.303503    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:23.304364    6816 node_ready.go:53] node "ha-792400-m02" has status "Ready":"False"
	I0307 23:18:23.807388    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:23.807417    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:23.807417    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:23.807417    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:23.811917    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:24.297579    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:24.297579    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:24.297579    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:24.297579    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:24.305078    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:18:24.804203    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:24.804203    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:24.804203    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:24.804203    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:24.962053    6816 round_trippers.go:574] Response Status: 200 OK in 157 milliseconds
	I0307 23:18:25.294110    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:25.294205    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:25.294205    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:25.294205    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:25.298518    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:25.798497    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:25.798563    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:25.798563    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:25.798563    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:25.803508    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:25.803807    6816 node_ready.go:53] node "ha-792400-m02" has status "Ready":"False"
	I0307 23:18:26.299244    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:26.299400    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:26.299400    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:26.299400    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:26.305210    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:26.804632    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:26.804719    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:26.804719    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:26.804719    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:26.809418    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:27.293556    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:27.293556    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:27.293556    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:27.293556    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:27.299138    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:27.799716    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:27.800026    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:27.800026    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:27.800026    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:27.804309    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:27.804309    6816 node_ready.go:53] node "ha-792400-m02" has status "Ready":"False"
	I0307 23:18:28.306790    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:28.306859    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.306859    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.306859    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.311110    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:28.809393    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:28.809393    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.809393    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.809393    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.815222    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:28.816229    6816 node_ready.go:49] node "ha-792400-m02" has status "Ready":"True"
	I0307 23:18:28.816302    6816 node_ready.go:38] duration metric: took 10.0230559s for node "ha-792400-m02" to be "Ready" ...
	I0307 23:18:28.816302    6816 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 23:18:28.816464    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:18:28.816464    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.816464    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.816464    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.826257    6816 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0307 23:18:28.835651    6816 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-28rtr" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.835651    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-28rtr
	I0307 23:18:28.835651    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.835651    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.835651    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.840206    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:28.840500    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:28.840500    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.840500    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.840500    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.847020    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:18:28.847197    6816 pod_ready.go:92] pod "coredns-5dd5756b68-28rtr" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:28.847197    6816 pod_ready.go:81] duration metric: took 11.5461ms for pod "coredns-5dd5756b68-28rtr" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.847197    6816 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rx9dg" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.847750    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rx9dg
	I0307 23:18:28.847750    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.847750    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.847750    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.854481    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:18:28.855198    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:28.855198    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.855198    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.855198    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.858552    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:18:28.859485    6816 pod_ready.go:92] pod "coredns-5dd5756b68-rx9dg" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:28.859485    6816 pod_ready.go:81] duration metric: took 12.2877ms for pod "coredns-5dd5756b68-rx9dg" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.859485    6816 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.859485    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400
	I0307 23:18:28.859485    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.859485    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.859485    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.864801    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:28.865971    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:28.866000    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.866000    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.866038    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.871041    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:28.871667    6816 pod_ready.go:92] pod "etcd-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:28.871667    6816 pod_ready.go:81] duration metric: took 12.1818ms for pod "etcd-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.871667    6816 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.871667    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m02
	I0307 23:18:28.871667    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.872243    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.872243    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.880060    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:18:28.880880    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:28.880880    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.880880    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.880880    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.895937    6816 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0307 23:18:28.895937    6816 pod_ready.go:92] pod "etcd-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:28.895937    6816 pod_ready.go:81] duration metric: took 24.2699ms for pod "etcd-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.895937    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:29.014093    6816 request.go:629] Waited for 118.1545ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400
	I0307 23:18:29.014369    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400
	I0307 23:18:29.014471    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:29.014471    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:29.014471    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:29.020691    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:18:29.218543    6816 request.go:629] Waited for 196.9149ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:29.218634    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:29.218634    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:29.218714    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:29.218714    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:29.223468    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:29.224513    6816 pod_ready.go:92] pod "kube-apiserver-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:29.224513    6816 pod_ready.go:81] duration metric: took 328.5728ms for pod "kube-apiserver-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:29.224513    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:29.421312    6816 request.go:629] Waited for 196.5754ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400-m02
	I0307 23:18:29.421545    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400-m02
	I0307 23:18:29.421610    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:29.421610    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:29.421610    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:29.427154    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:29.623335    6816 request.go:629] Waited for 194.9154ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:29.623434    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:29.623434    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:29.623434    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:29.623434    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:29.626389    6816 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 23:18:29.628293    6816 pod_ready.go:92] pod "kube-apiserver-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:29.628293    6816 pod_ready.go:81] duration metric: took 403.7763ms for pod "kube-apiserver-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:29.628293    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:29.812274    6816 request.go:629] Waited for 183.6964ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400
	I0307 23:18:29.812417    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400
	I0307 23:18:29.812456    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:29.812456    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:29.812456    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:29.817273    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:30.018485    6816 request.go:629] Waited for 199.9866ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:30.018627    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:30.018627    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:30.018627    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:30.018627    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:30.026078    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:18:30.027375    6816 pod_ready.go:92] pod "kube-controller-manager-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:30.027466    6816 pod_ready.go:81] duration metric: took 399.1691ms for pod "kube-controller-manager-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:30.027466    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:30.222541    6816 request.go:629] Waited for 194.5931ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400-m02
	I0307 23:18:30.222628    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400-m02
	I0307 23:18:30.222628    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:30.222628    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:30.222628    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:30.227421    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:30.410818    6816 request.go:629] Waited for 181.4497ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:30.410867    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:30.410867    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:30.410867    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:30.410867    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:30.415499    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:30.417054    6816 pod_ready.go:92] pod "kube-controller-manager-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:30.417054    6816 pod_ready.go:81] duration metric: took 389.5842ms for pod "kube-controller-manager-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:30.417054    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6wd5" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:30.614134    6816 request.go:629] Waited for 196.8574ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6wd5
	I0307 23:18:30.614134    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6wd5
	I0307 23:18:30.614134    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:30.614134    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:30.614393    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:30.618431    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:30.815276    6816 request.go:629] Waited for 194.2966ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:30.815276    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:30.815276    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:30.815540    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:30.815659    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:30.820239    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:30.821389    6816 pod_ready.go:92] pod "kube-proxy-j6wd5" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:30.821389    6816 pod_ready.go:81] duration metric: took 404.3317ms for pod "kube-proxy-j6wd5" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:30.821389    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zxmcc" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:31.016458    6816 request.go:629] Waited for 194.8769ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxmcc
	I0307 23:18:31.016458    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxmcc
	I0307 23:18:31.016458    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:31.016458    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:31.016822    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:31.022644    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:31.219281    6816 request.go:629] Waited for 195.6742ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:31.219372    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:31.219585    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:31.219585    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:31.219585    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:31.224391    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:31.224987    6816 pod_ready.go:92] pod "kube-proxy-zxmcc" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:31.224987    6816 pod_ready.go:81] duration metric: took 403.4887ms for pod "kube-proxy-zxmcc" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:31.224987    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:31.421342    6816 request.go:629] Waited for 196.3531ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400
	I0307 23:18:31.421342    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400
	I0307 23:18:31.421342    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:31.421342    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:31.421342    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:31.426405    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:31.611193    6816 request.go:629] Waited for 183.7264ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:31.611283    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:31.611283    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:31.611495    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:31.611495    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:31.618314    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:18:31.618797    6816 pod_ready.go:92] pod "kube-scheduler-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:31.618797    6816 pod_ready.go:81] duration metric: took 393.8062ms for pod "kube-scheduler-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:31.619395    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:31.813892    6816 request.go:629] Waited for 194.2507ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400-m02
	I0307 23:18:31.813982    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400-m02
	I0307 23:18:31.813982    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:31.813982    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:31.814136    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:31.820851    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:18:32.017806    6816 request.go:629] Waited for 196.009ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:32.017966    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:32.017966    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:32.017966    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:32.017966    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:32.022199    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:32.023311    6816 pod_ready.go:92] pod "kube-scheduler-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:32.023402    6816 pod_ready.go:81] duration metric: took 403.9719ms for pod "kube-scheduler-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:32.023402    6816 pod_ready.go:38] duration metric: took 3.2070692s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 23:18:32.023402    6816 api_server.go:52] waiting for apiserver process to appear ...
	I0307 23:18:32.035197    6816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 23:18:32.062836    6816 api_server.go:72] duration metric: took 13.6583094s to wait for apiserver process to appear ...
	I0307 23:18:32.062836    6816 api_server.go:88] waiting for apiserver healthz status ...
	I0307 23:18:32.062922    6816 api_server.go:253] Checking apiserver healthz at https://172.20.58.169:8443/healthz ...
	I0307 23:18:32.070969    6816 api_server.go:279] https://172.20.58.169:8443/healthz returned 200:
	ok
	I0307 23:18:32.071308    6816 round_trippers.go:463] GET https://172.20.58.169:8443/version
	I0307 23:18:32.071308    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:32.071308    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:32.071308    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:32.073096    6816 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 23:18:32.073732    6816 api_server.go:141] control plane version: v1.28.4
	I0307 23:18:32.073782    6816 api_server.go:131] duration metric: took 10.8595ms to wait for apiserver health ...
	I0307 23:18:32.073850    6816 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 23:18:32.220678    6816 request.go:629] Waited for 146.7393ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:18:32.220883    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:18:32.220883    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:32.220883    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:32.220883    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:32.228173    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:18:32.233605    6816 system_pods.go:59] 17 kube-system pods found
	I0307 23:18:32.234185    6816 system_pods.go:61] "coredns-5dd5756b68-28rtr" [8f70fcea-fb5e-4bfe-a184-a7487922459d] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "coredns-5dd5756b68-rx9dg" [09969ba6-29bd-449a-8df2-85d52c1cca8e] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "etcd-ha-792400" [6d4e209d-fc9c-4f71-a13f-b359b65ae7ad] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "etcd-ha-792400-m02" [ed952253-b72b-4443-9189-ad1dcfabc268] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kindnet-7bztm" [a0918f25-6cde-462e-8f12-58c424e25ffa] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kindnet-fvx87" [e26e6f69-a3e8-4b89-9ec0-21959683db17] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-apiserver-ha-792400" [2356c8e9-8a52-4bf2-b8e6-24974e45f15c] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-apiserver-ha-792400-m02" [54d24fa6-cc12-47f7-89b8-07c35b710b9c] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-controller-manager-ha-792400" [57efa972-84b4-4614-b8e0-c6e3eeef55f7] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-controller-manager-ha-792400-m02" [3a897c1b-a6a9-4ecb-abb4-f350789cde8a] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-proxy-j6wd5" [bc09092e-551d-448f-af38-f8412bdcfe3a] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-proxy-zxmcc" [0a429b85-7b58-447e-963b-39976d48fba0] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-scheduler-ha-792400" [24c51162-87f0-4232-bc6a-32aef6110baa] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-scheduler-ha-792400-m02" [26d95aae-6bc6-4245-a5de-3848b6e4d1c2] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-vip-ha-792400" [31f2517d-5b88-4c07-87cd-66c667534a2f] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-vip-ha-792400-m02" [b41fc2d0-39a4-4fba-867d-371a5c918c90] Running
	I0307 23:18:32.234348    6816 system_pods.go:61] "storage-provisioner" [d2cfae90-8302-4ce4-8292-de4938b0b9ae] Running
	I0307 23:18:32.234348    6816 system_pods.go:74] duration metric: took 160.4484ms to wait for pod list to return data ...
	I0307 23:18:32.234348    6816 default_sa.go:34] waiting for default service account to be created ...
	I0307 23:18:32.424721    6816 request.go:629] Waited for 190.128ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/default/serviceaccounts
	I0307 23:18:32.424721    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/default/serviceaccounts
	I0307 23:18:32.424721    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:32.424721    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:32.424721    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:32.429359    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:32.429800    6816 default_sa.go:45] found service account: "default"
	I0307 23:18:32.429899    6816 default_sa.go:55] duration metric: took 195.4502ms for default service account to be created ...
	I0307 23:18:32.429899    6816 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 23:18:32.612348    6816 request.go:629] Waited for 182.1147ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:18:32.612416    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:18:32.612416    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:32.612553    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:32.612608    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:32.620007    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:18:32.626443    6816 system_pods.go:86] 17 kube-system pods found
	I0307 23:18:32.626443    6816 system_pods.go:89] "coredns-5dd5756b68-28rtr" [8f70fcea-fb5e-4bfe-a184-a7487922459d] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "coredns-5dd5756b68-rx9dg" [09969ba6-29bd-449a-8df2-85d52c1cca8e] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "etcd-ha-792400" [6d4e209d-fc9c-4f71-a13f-b359b65ae7ad] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "etcd-ha-792400-m02" [ed952253-b72b-4443-9189-ad1dcfabc268] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kindnet-7bztm" [a0918f25-6cde-462e-8f12-58c424e25ffa] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kindnet-fvx87" [e26e6f69-a3e8-4b89-9ec0-21959683db17] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-apiserver-ha-792400" [2356c8e9-8a52-4bf2-b8e6-24974e45f15c] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-apiserver-ha-792400-m02" [54d24fa6-cc12-47f7-89b8-07c35b710b9c] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-controller-manager-ha-792400" [57efa972-84b4-4614-b8e0-c6e3eeef55f7] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-controller-manager-ha-792400-m02" [3a897c1b-a6a9-4ecb-abb4-f350789cde8a] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-proxy-j6wd5" [bc09092e-551d-448f-af38-f8412bdcfe3a] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-proxy-zxmcc" [0a429b85-7b58-447e-963b-39976d48fba0] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-scheduler-ha-792400" [24c51162-87f0-4232-bc6a-32aef6110baa] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-scheduler-ha-792400-m02" [26d95aae-6bc6-4245-a5de-3848b6e4d1c2] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-vip-ha-792400" [31f2517d-5b88-4c07-87cd-66c667534a2f] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-vip-ha-792400-m02" [b41fc2d0-39a4-4fba-867d-371a5c918c90] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "storage-provisioner" [d2cfae90-8302-4ce4-8292-de4938b0b9ae] Running
	I0307 23:18:32.626443    6816 system_pods.go:126] duration metric: took 196.5429ms to wait for k8s-apps to be running ...
	I0307 23:18:32.626443    6816 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 23:18:32.636205    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 23:18:32.660703    6816 system_svc.go:56] duration metric: took 34.2594ms WaitForService to wait for kubelet
	I0307 23:18:32.660703    6816 kubeadm.go:576] duration metric: took 14.2561706s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 23:18:32.660826    6816 node_conditions.go:102] verifying NodePressure condition ...
	I0307 23:18:32.816160    6816 request.go:629] Waited for 155.2814ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes
	I0307 23:18:32.816433    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes
	I0307 23:18:32.816433    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:32.816515    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:32.816534    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:32.821312    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:32.822345    6816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 23:18:32.822345    6816 node_conditions.go:123] node cpu capacity is 2
	I0307 23:18:32.822345    6816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 23:18:32.822345    6816 node_conditions.go:123] node cpu capacity is 2
	I0307 23:18:32.822345    6816 node_conditions.go:105] duration metric: took 161.5169ms to run NodePressure ...
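
The node-conditions check reads CPU and ephemeral-storage capacity straight off the GET /api/v1/nodes request logged above. A rough sketch of decoding just those two fields; TLS and authentication are deliberately left out, and the URL assumes something like a local kubectl proxy rather than the real 172.20.58.169:8443 endpoint:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    // nodeList mirrors only the fields used for the capacity log lines.
    type nodeList struct {
    	Items []struct {
    		Metadata struct {
    			Name string `json:"name"`
    		} `json:"metadata"`
    		Status struct {
    			Capacity map[string]string `json:"capacity"`
    		} `json:"status"`
    	} `json:"items"`
    }

    func main() {
    	// Hypothetical unauthenticated endpoint (e.g. `kubectl proxy` on localhost).
    	resp, err := http.Get("http://127.0.0.1:8001/api/v1/nodes")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	var nl nodeList
    	if err := json.NewDecoder(resp.Body).Decode(&nl); err != nil {
    		panic(err)
    	}
    	for _, n := range nl.Items {
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
    			n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
    	}
    }
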
	I0307 23:18:32.822345    6816 start.go:240] waiting for startup goroutines ...
	I0307 23:18:32.822345    6816 start.go:254] writing updated cluster config ...
	I0307 23:18:32.828095    6816 out.go:177] 
	I0307 23:18:32.838120    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:18:32.838120    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:18:32.845166    6816 out.go:177] * Starting "ha-792400-m03" control-plane node in "ha-792400" cluster
	I0307 23:18:32.847373    6816 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 23:18:32.847373    6816 cache.go:56] Caching tarball of preloaded images
	I0307 23:18:32.847892    6816 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 23:18:32.848072    6816 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 23:18:32.848316    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:18:32.855116    6816 start.go:360] acquireMachinesLock for ha-792400-m03: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 23:18:32.855116    6816 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-792400-m03"
	I0307 23:18:32.855116    6816 start.go:93] Provisioning new machine with config: &{Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.50.199 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:18:32.856034    6816 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0307 23:18:32.860028    6816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 23:18:32.861048    6816 start.go:159] libmachine.API.Create for "ha-792400" (driver="hyperv")
	I0307 23:18:32.861048    6816 client.go:168] LocalClient.Create starting
	I0307 23:18:32.861048    6816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0307 23:18:32.861048    6816 main.go:141] libmachine: Decoding PEM data...
	I0307 23:18:32.862049    6816 main.go:141] libmachine: Parsing certificate...
	I0307 23:18:32.862049    6816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0307 23:18:32.862049    6816 main.go:141] libmachine: Decoding PEM data...
	I0307 23:18:32.862049    6816 main.go:141] libmachine: Parsing certificate...
	I0307 23:18:32.862049    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0307 23:18:34.674427    6816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0307 23:18:34.675322    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:34.675392    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0307 23:18:36.332405    6816 main.go:141] libmachine: [stdout =====>] : False
	
	I0307 23:18:36.332405    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:36.332524    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0307 23:18:37.745919    6816 main.go:141] libmachine: [stdout =====>] : True
	
	I0307 23:18:37.746187    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:37.746187    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0307 23:18:41.213730    6816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0307 23:18:41.213730    6816 main.go:141] libmachine: [stderr =====>] : 
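
The switch discovery above is a single PowerShell invocation whose JSON output is decoded and filtered. A sketch of that pattern in Go (illustrative, not libmachine's actual code):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type vmSwitch struct {
    	Id         string
    	Name       string
    	SwitchType int
    }

    func main() {
    	script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
    		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
    	if err != nil {
    		panic(err)
    	}
    	var switches []vmSwitch
    	if err := json.Unmarshal(out, &switches); err != nil {
    		panic(err)
    	}
    	for _, s := range switches {
    		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
    	}
    }
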
	I0307 23:18:41.215881    6816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0307 23:18:41.691941    6816 main.go:141] libmachine: Creating SSH key...
	I0307 23:18:41.918056    6816 main.go:141] libmachine: Creating VM...
	I0307 23:18:41.918056    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0307 23:18:44.648200    6816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0307 23:18:44.649036    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:44.649036    6816 main.go:141] libmachine: Using switch "Default Switch"
	I0307 23:18:44.649166    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0307 23:18:46.332251    6816 main.go:141] libmachine: [stdout =====>] : True
	
	I0307 23:18:46.332251    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:46.333265    6816 main.go:141] libmachine: Creating VHD
	I0307 23:18:46.333314    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0307 23:18:49.891571    6816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 984A58C8-77D7-44BA-AC0B-7F6204C11272
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0307 23:18:49.892312    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:49.892365    6816 main.go:141] libmachine: Writing magic tar header
	I0307 23:18:49.892365    6816 main.go:141] libmachine: Writing SSH key tar header
	I0307 23:18:49.901638    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0307 23:18:52.973905    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:18:52.973905    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:52.973905    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\disk.vhd' -SizeBytes 20000MB
	I0307 23:18:55.420715    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:18:55.420853    6816 main.go:141] libmachine: [stderr =====>] : 
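
The "Writing magic tar header / Writing SSH key tar header" lines refer to a small tar archive (a marker file plus the freshly generated SSH public key) written directly into the 10MB fixed VHD before it is converted to dynamic and resized, so the guest can pick the key up on first boot. A loose sketch of that idea; the marker string and file layout are assumptions, not the exact format minikube uses:

    package main

    import (
    	"archive/tar"
    	"os"
    )

    // writeKeyTar overwrites the start of a fixed-size VHD's data area with a
    // tiny tar archive carrying a marker file and an SSH public key.
    // A fixed VHD keeps raw disk data at offset 0 and its footer at the end,
    // so a tar written here survives the later Convert-VHD to dynamic.
    func writeKeyTar(vhdPath string, pubKey []byte) error {
    	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	tw := tar.NewWriter(f)
    	files := []struct {
    		name string
    		data []byte
    	}{
    		{"magic", []byte("boot2docker, please format-me")}, // assumed marker string
    		{".ssh/authorized_keys", pubKey},
    	}
    	for _, file := range files {
    		hdr := &tar.Header{Name: file.name, Mode: 0644, Size: int64(len(file.data))}
    		if err := tw.WriteHeader(hdr); err != nil {
    			return err
    		}
    		if _, err := tw.Write(file.data); err != nil {
    			return err
    		}
    	}
    	return tw.Close()
    }

    func main() {
    	_ = writeKeyTar(`C:\path\to\fixed.vhd`, []byte("ssh-rsa AAAA... example"))
    }
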
	I0307 23:18:55.420853    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-792400-m03 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0307 23:18:58.862541    6816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-792400-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0307 23:18:58.863445    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:58.863445    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-792400-m03 -DynamicMemoryEnabled $false
	I0307 23:19:00.997697    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:00.997697    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:00.998370    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-792400-m03 -Count 2
	I0307 23:19:03.069479    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:03.069479    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:03.069479    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-792400-m03 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\boot2docker.iso'
	I0307 23:19:05.522516    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:05.522516    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:05.522792    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-792400-m03 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\disk.vhd'
	I0307 23:19:08.003610    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:08.004205    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:08.004205    6816 main.go:141] libmachine: Starting VM...
	I0307 23:19:08.004205    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-792400-m03
	I0307 23:19:10.892429    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:10.892500    6816 main.go:141] libmachine: [stderr =====>] : 
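
The New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive, Add-VMHardDiskDrive and Start-VM calls above all follow the same "run one PowerShell command, stop at the first error" pattern. A compact sketch (the VM name and paths are placeholders):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ps runs one PowerShell command and returns its combined output on failure.
    func ps(cmd string) error {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
    	}
    	return nil
    }

    func main() {
    	name := "example-vm" // placeholder VM name
    	steps := []string{
    		fmt.Sprintf(`Hyper-V\New-VM %s -Path 'C:\vms\%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, name, name),
    		fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
    		fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, name),
    		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path 'C:\vms\%s\boot2docker.iso'`, name, name),
    		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path 'C:\vms\%s\disk.vhd'`, name, name),
    		fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
    	}
    	for _, s := range steps {
    		if err := ps(s); err != nil {
    			fmt.Println(err)
    			return
    		}
    	}
    }
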
	I0307 23:19:10.892500    6816 main.go:141] libmachine: Waiting for host to start...
	I0307 23:19:10.892500    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:13.060020    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:13.060020    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:13.061007    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:15.442274    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:15.442274    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:16.456925    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:18.543788    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:18.544171    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:18.544243    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:20.916663    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:20.916663    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:21.928270    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:24.023630    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:24.023630    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:24.023852    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:26.396845    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:26.396845    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:27.405150    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:29.508296    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:29.508296    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:29.509031    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:31.880174    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:31.880527    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:32.889719    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:35.016152    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:35.016401    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:35.016401    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:37.423741    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:19:37.423741    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:37.424261    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:39.423918    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:39.424153    6816 main.go:141] libmachine: [stderr =====>] : 
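
"Waiting for host to start..." is a plain poll: check the VM state, ask for the first address on its first network adapter, sleep, and repeat until an address appears (roughly 26 seconds in this run). A sketch of that loop with an overall deadline:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func psOut(cmd string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls Hyper-V until the VM reports its first IP address.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		state, err := psOut(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
    		if err != nil || state != "Running" {
    			time.Sleep(time.Second)
    			continue
    		}
    		ip, _ := psOut(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
    		if ip != "" {
    			return ip, nil
    		}
    		time.Sleep(time.Second)
    	}
    	return "", fmt.Errorf("no IP for %s after %s", vm, timeout)
    }

    func main() {
    	ip, err := waitForIP("ha-792400-m03", 5*time.Minute)
    	fmt.Println(ip, err)
    }
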
	I0307 23:19:39.424153    6816 machine.go:94] provisionDockerMachine start ...
	I0307 23:19:39.424278    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:41.461862    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:41.461862    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:41.461862    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:43.894490    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:19:43.894490    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:43.899764    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:19:43.899906    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:19:43.899906    6816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 23:19:44.016925    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 23:19:44.016925    6816 buildroot.go:166] provisioning hostname "ha-792400-m03"
	I0307 23:19:44.016925    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:46.030349    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:46.030349    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:46.031073    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:48.458139    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:19:48.458139    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:48.463374    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:19:48.463872    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:19:48.463872    6816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792400-m03 && echo "ha-792400-m03" | sudo tee /etc/hostname
	I0307 23:19:48.610866    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792400-m03
	
	I0307 23:19:48.610980    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:50.643348    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:50.644204    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:50.644265    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:53.045406    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:19:53.045554    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:53.050577    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:19:53.050745    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:19:53.050745    6816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792400-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792400-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792400-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 23:19:53.182421    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 23:19:53.182421    6816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0307 23:19:53.182421    6816 buildroot.go:174] setting up certificates
	I0307 23:19:53.182421    6816 provision.go:84] configureAuth start
	I0307 23:19:53.182421    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:55.202100    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:55.202100    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:55.202351    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:57.592949    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:19:57.592949    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:57.592949    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:59.629670    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:59.629670    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:59.629670    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:02.046894    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:02.046894    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:02.046894    6816 provision.go:143] copyHostCerts
	I0307 23:20:02.046894    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0307 23:20:02.046894    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0307 23:20:02.046894    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0307 23:20:02.047548    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0307 23:20:02.049370    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0307 23:20:02.049487    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0307 23:20:02.049487    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0307 23:20:02.049487    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0307 23:20:02.050730    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0307 23:20:02.051012    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0307 23:20:02.051040    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0307 23:20:02.051385    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0307 23:20:02.051904    6816 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-792400-m03 san=[127.0.0.1 172.20.59.36 ha-792400-m03 localhost minikube]
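
The server-cert step issues a Docker TLS certificate whose SANs cover the loopback address, the VM's IP, its hostname, localhost and minikube, signed by the shared CA. A self-contained sketch of creating a SAN certificate with the standard library (self-signed here for brevity; the real one is signed with ca.pem/ca-key.pem):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-792400-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the ones logged above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.59.36")},
    		DNSNames:    []string{"ha-792400-m03", "localhost", "minikube"},
    	}
    	// Self-signed for the sketch; the real flow signs with the CA cert and key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
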
	I0307 23:20:02.191375    6816 provision.go:177] copyRemoteCerts
	I0307 23:20:02.203349    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 23:20:02.203349    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:04.234290    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:04.234290    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:04.234290    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:06.623182    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:06.623636    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:06.623636    6816 sshutil.go:53] new ssh client: &{IP:172.20.59.36 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\id_rsa Username:docker}
	I0307 23:20:06.732081    6816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.528629s)
	I0307 23:20:06.732081    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0307 23:20:06.732081    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 23:20:06.778770    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0307 23:20:06.778839    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0307 23:20:06.823400    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0307 23:20:06.823739    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 23:20:06.869051    6816 provision.go:87] duration metric: took 13.6864993s to configureAuth
	I0307 23:20:06.869123    6816 buildroot.go:189] setting minikube options for container-runtime
	I0307 23:20:06.869727    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:20:06.869823    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:08.867202    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:08.867202    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:08.867526    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:11.258599    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:11.259041    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:11.264316    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:20:11.264316    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:20:11.264316    6816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 23:20:11.386983    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 23:20:11.386983    6816 buildroot.go:70] root file system type: tmpfs
	I0307 23:20:11.387899    6816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 23:20:11.387899    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:13.393436    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:13.393436    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:13.393436    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:15.798991    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:15.798991    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:15.804603    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:20:15.804603    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:20:15.804603    6816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.58.169"
	Environment="NO_PROXY=172.20.58.169,172.20.50.199"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 23:20:15.943601    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.58.169
	Environment=NO_PROXY=172.20.58.169,172.20.50.199
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
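
The docker.service unit echoed back above is generated text; note how every already-provisioned control-plane IP contributes another cumulative Environment="NO_PROXY=..." line. A sketch of producing that block with text/template (the field names are mine, not minikube's):

    package main

    import (
    	"os"
    	"strings"
    	"text/template"
    )

    const unitTmpl = `[Service]
    {{- range .NoProxy}}
    Environment="NO_PROXY={{.}}"
    {{- end}}
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --label provider=hyperv
    `

    func main() {
    	// Cumulative NO_PROXY lists, one per already-provisioned control-plane node.
    	ips := []string{"172.20.58.169", "172.20.50.199"}
    	var noProxy []string
    	for i := range ips {
    		noProxy = append(noProxy, strings.Join(ips[:i+1], ","))
    	}
    	t := template.Must(template.New("unit").Parse(unitTmpl))
    	t.Execute(os.Stdout, map[string]any{"NoProxy": noProxy})
    }
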
	I0307 23:20:15.943712    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:18.001151    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:18.001762    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:18.001762    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:20.445742    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:20.445878    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:20.450689    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:20:20.451437    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:20:20.451437    6816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 23:20:21.619025    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 23:20:21.619025    6816 machine.go:97] duration metric: took 42.1944716s to provisionDockerMachine
	I0307 23:20:21.619025    6816 client.go:171] duration metric: took 1m48.7569484s to LocalClient.Create
	I0307 23:20:21.619025    6816 start.go:167] duration metric: took 1m48.7569484s to libmachine.API.Create "ha-792400"
	I0307 23:20:21.619025    6816 start.go:293] postStartSetup for "ha-792400-m03" (driver="hyperv")
	I0307 23:20:21.619025    6816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 23:20:21.630707    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 23:20:21.630707    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:23.629603    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:23.629603    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:23.629603    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:26.030278    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:26.030278    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:26.030278    6816 sshutil.go:53] new ssh client: &{IP:172.20.59.36 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\id_rsa Username:docker}
	I0307 23:20:26.136844    6816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5060952s)
	I0307 23:20:26.147621    6816 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 23:20:26.154917    6816 info.go:137] Remote host: Buildroot 2023.02.9
	I0307 23:20:26.154962    6816 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0307 23:20:26.155138    6816 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0307 23:20:26.155988    6816 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0307 23:20:26.155988    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0307 23:20:26.167573    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 23:20:26.186576    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0307 23:20:26.231978    6816 start.go:296] duration metric: took 4.6129093s for postStartSetup
	I0307 23:20:26.234775    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:28.229897    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:28.229897    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:28.230816    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:30.614500    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:30.614500    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:30.614990    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:20:30.618284    6816 start.go:128] duration metric: took 1m57.761136s to createHost
	I0307 23:20:30.618445    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:32.606765    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:32.606884    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:32.606884    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:35.014723    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:35.014876    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:35.020837    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:20:35.020837    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:20:35.021382    6816 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 23:20:35.146460    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709853635.156036719
	
	I0307 23:20:35.146558    6816 fix.go:216] guest clock: 1709853635.156036719
	I0307 23:20:35.146558    6816 fix.go:229] Guest: 2024-03-07 23:20:35.156036719 +0000 UTC Remote: 2024-03-07 23:20:30.618348 +0000 UTC m=+532.325941501 (delta=4.537688719s)
	I0307 23:20:35.146642    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:37.145169    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:37.145169    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:37.145169    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:39.544643    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:39.544643    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:39.550178    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:20:39.550852    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:20:39.550852    6816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709853635
	I0307 23:20:39.688579    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar  7 23:20:35 UTC 2024
	
	I0307 23:20:39.688579    6816 fix.go:236] clock set: Thu Mar  7 23:20:35 UTC 2024
	 (err=<nil>)
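
The fix.go lines compare the guest's `date +%s.%N` output against the local clock and, because the delta (about 4.5s here) is over the drift threshold, reset the guest clock with `sudo date -s @<epoch>`. A sketch of the comparison, assuming the guest timestamp has already been read back as a string (the 2-second threshold is my own placeholder):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	guestOut := "1709853635.156036719" // output of `date +%s.%N` on the guest
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	fmt.Printf("guest: %s  delta: %s\n", guest.UTC(), delta)

    	// Hypothetical drift threshold; beyond it the guest clock gets reset over SSH.
    	if delta > 2*time.Second || delta < -2*time.Second {
    		fmt.Printf("would run on guest: sudo date -s @%d\n", guest.Unix())
    	}
    }
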
	I0307 23:20:39.688579    6816 start.go:83] releasing machines lock for "ha-792400-m03", held for 2m6.8322641s
	I0307 23:20:39.688579    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:41.689437    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:41.689437    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:41.689437    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:44.075943    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:44.075943    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:44.078644    6816 out.go:177] * Found network options:
	I0307 23:20:44.082240    6816 out.go:177]   - NO_PROXY=172.20.58.169,172.20.50.199
	W0307 23:20:44.086486    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 23:20:44.086486    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 23:20:44.088999    6816 out.go:177]   - NO_PROXY=172.20.58.169,172.20.50.199
	W0307 23:20:44.091327    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 23:20:44.091327    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 23:20:44.092871    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 23:20:44.092871    6816 proxy.go:119] fail to check proxy env: Error ip not in block
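
The repeated "fail to check proxy env: Error ip not in block" warnings come from testing whether each node IP is already covered by a NO_PROXY entry; the entries here are bare IPs rather than CIDR blocks, so the CIDR parse fails and the warning is emitted. A sketch of such a check (the helper name is mine):

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    // coveredByNoProxy reports whether ip matches any NO_PROXY entry,
    // treating entries as either exact IPs or CIDR blocks.
    func coveredByNoProxy(noProxy, ip string) bool {
    	addr := net.ParseIP(ip)
    	for _, entry := range strings.Split(noProxy, ",") {
    		entry = strings.TrimSpace(entry)
    		if entry == "" {
    			continue
    		}
    		if entry == ip {
    			return true
    		}
    		if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(addr) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	fmt.Println(coveredByNoProxy("172.20.58.169,172.20.50.199", "172.20.59.36")) // false
    }
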
	I0307 23:20:44.095206    6816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 23:20:44.095206    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:44.107175    6816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 23:20:44.107175    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:46.161154    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:46.161154    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:46.161154    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:46.172867    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:46.172867    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:46.172867    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:48.719280    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:48.719348    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:48.719348    6816 sshutil.go:53] new ssh client: &{IP:172.20.59.36 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\id_rsa Username:docker}
	I0307 23:20:48.728937    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:48.728937    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:48.728937    6816 sshutil.go:53] new ssh client: &{IP:172.20.59.36 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\id_rsa Username:docker}
	I0307 23:20:48.807797    6816 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7005777s)
	W0307 23:20:48.807797    6816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 23:20:48.818530    6816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 23:20:48.873904    6816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 23:20:48.873904    6816 start.go:494] detecting cgroup driver to use...
	I0307 23:20:48.873904    6816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7786528s)
	I0307 23:20:48.874581    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 23:20:48.920852    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 23:20:48.951906    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 23:20:48.969858    6816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 23:20:48.979894    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 23:20:49.006851    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 23:20:49.039295    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 23:20:49.067931    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 23:20:49.097279    6816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 23:20:49.131940    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 23:20:49.162577    6816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 23:20:49.189156    6816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 23:20:49.217674    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:20:49.409887    6816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 23:20:49.440919    6816 start.go:494] detecting cgroup driver to use...
	I0307 23:20:49.453189    6816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 23:20:49.493154    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 23:20:49.525753    6816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 23:20:49.571555    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 23:20:49.605106    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 23:20:49.640102    6816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 23:20:49.705340    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 23:20:49.727183    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 23:20:49.771613    6816 ssh_runner.go:195] Run: which cri-dockerd
	I0307 23:20:49.789015    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 23:20:49.808561    6816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 23:20:49.849146    6816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 23:20:50.038104    6816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 23:20:50.210044    6816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 23:20:50.210044    6816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 23:20:50.254946    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:20:50.446876    6816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 23:20:51.983412    6816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5364127s)
	I0307 23:20:51.995408    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 23:20:52.029079    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 23:20:52.062863    6816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 23:20:52.257988    6816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 23:20:52.450714    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:20:52.643497    6816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 23:20:52.683067    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 23:20:52.716509    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:20:52.901323    6816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 23:20:52.998724    6816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 23:20:53.010772    6816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 23:20:53.019238    6816 start.go:562] Will wait 60s for crictl version
	I0307 23:20:53.029900    6816 ssh_runner.go:195] Run: which crictl
	I0307 23:20:53.047805    6816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 23:20:53.116723    6816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0307 23:20:53.128600    6816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 23:20:53.177042    6816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 23:20:53.209905    6816 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0307 23:20:53.212815    6816 out.go:177]   - env NO_PROXY=172.20.58.169
	I0307 23:20:53.215363    6816 out.go:177]   - env NO_PROXY=172.20.58.169,172.20.50.199
	I0307 23:20:53.217125    6816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0307 23:20:53.221876    6816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0307 23:20:53.221904    6816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0307 23:20:53.221904    6816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0307 23:20:53.221964    6816 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0307 23:20:53.224739    6816 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0307 23:20:53.224739    6816 ip.go:210] interface addr: 172.20.48.1/20
	I0307 23:20:53.236196    6816 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0307 23:20:53.241292    6816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 23:20:53.262854    6816 mustload.go:65] Loading cluster: ha-792400
	I0307 23:20:53.263524    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:20:53.264239    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:20:55.263553    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:55.263553    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:55.263553    6816 host.go:66] Checking if "ha-792400" exists ...
	I0307 23:20:55.264334    6816 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400 for IP: 172.20.59.36
	I0307 23:20:55.264334    6816 certs.go:194] generating shared ca certs ...
	I0307 23:20:55.264334    6816 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:20:55.264899    6816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0307 23:20:55.265581    6816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0307 23:20:55.265738    6816 certs.go:256] generating profile certs ...
	I0307 23:20:55.266378    6816 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.key
	I0307 23:20:55.266650    6816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6e7a70c4
	I0307 23:20:55.266755    6816 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6e7a70c4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.58.169 172.20.50.199 172.20.59.36 172.20.63.254]
	I0307 23:20:55.424258    6816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6e7a70c4 ...
	I0307 23:20:55.424258    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6e7a70c4: {Name:mk2d7123acb961ebc703db74541faae0d436c001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:20:55.426195    6816 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6e7a70c4 ...
	I0307 23:20:55.426195    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6e7a70c4: {Name:mkdaf51f147289c85301dcf4dc53946c27cee5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:20:55.426195    6816 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6e7a70c4 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt
	I0307 23:20:55.439337    6816 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6e7a70c4 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key
	I0307 23:20:55.441610    6816 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key
	I0307 23:20:55.442174    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 23:20:55.442483    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0307 23:20:55.442483    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 23:20:55.442483    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 23:20:55.443011    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0307 23:20:55.443140    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0307 23:20:55.443140    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0307 23:20:55.443902    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0307 23:20:55.443902    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0307 23:20:55.444618    6816 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0307 23:20:55.444618    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0307 23:20:55.444618    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0307 23:20:55.445359    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0307 23:20:55.445359    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0307 23:20:55.445952    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0307 23:20:55.445952    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:20:55.445952    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0307 23:20:55.445952    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0307 23:20:55.446663    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:20:57.440346    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:57.440647    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:57.440743    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:59.871074    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:20:59.871145    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:59.871145    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:20:59.969836    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0307 23:20:59.977471    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0307 23:21:00.010894    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0307 23:21:00.018597    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0307 23:21:00.048674    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0307 23:21:00.056675    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0307 23:21:00.086664    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0307 23:21:00.093706    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0307 23:21:00.125902    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0307 23:21:00.131441    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0307 23:21:00.158033    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0307 23:21:00.164733    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0307 23:21:00.183114    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 23:21:00.230565    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 23:21:00.272600    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 23:21:00.314739    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0307 23:21:00.359073    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0307 23:21:00.401038    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0307 23:21:00.442713    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 23:21:00.485162    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 23:21:00.528914    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 23:21:00.569153    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0307 23:21:00.615285    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0307 23:21:00.659190    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0307 23:21:00.689317    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0307 23:21:00.721226    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0307 23:21:00.753397    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0307 23:21:00.784124    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0307 23:21:00.815067    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0307 23:21:00.845478    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0307 23:21:00.883965    6816 ssh_runner.go:195] Run: openssl version
	I0307 23:21:00.904106    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 23:21:00.933299    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:21:00.939760    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:21:00.952207    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:21:00.972055    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 23:21:01.001634    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0307 23:21:01.030725    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0307 23:21:01.039489    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0307 23:21:01.051492    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0307 23:21:01.071519    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0307 23:21:01.100256    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0307 23:21:01.132018    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0307 23:21:01.138629    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0307 23:21:01.150998    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0307 23:21:01.170130    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 23:21:01.199638    6816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 23:21:01.205585    6816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0307 23:21:01.205585    6816 kubeadm.go:928] updating node {m03 172.20.59.36 8443 v1.28.4 docker true true} ...
	I0307 23:21:01.205585    6816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792400-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.59.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 23:21:01.206117    6816 kube-vip.go:101] generating kube-vip config ...
	I0307 23:21:01.206264    6816 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.63.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0307 23:21:01.216793    6816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 23:21:01.233719    6816 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0307 23:21:01.244916    6816 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0307 23:21:01.262567    6816 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0307 23:21:01.262699    6816 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0307 23:21:01.262787    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0307 23:21:01.262623    6816 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0307 23:21:01.263080    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0307 23:21:01.275943    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 23:21:01.277467    6816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0307 23:21:01.278069    6816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0307 23:21:01.297786    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0307 23:21:01.297786    6816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0307 23:21:01.297786    6816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0307 23:21:01.297786    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0307 23:21:01.297786    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0307 23:21:01.308806    6816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0307 23:21:01.354304    6816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0307 23:21:01.354557    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0307 23:21:02.633277    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0307 23:21:02.650757    6816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0307 23:21:02.681564    6816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 23:21:02.715680    6816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1262 bytes)
	I0307 23:21:02.761892    6816 ssh_runner.go:195] Run: grep 172.20.63.254	control-plane.minikube.internal$ /etc/hosts
	I0307 23:21:02.767722    6816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 23:21:02.799183    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:21:03.003240    6816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 23:21:03.034625    6816 host.go:66] Checking if "ha-792400" exists ...
	I0307 23:21:03.035310    6816 start.go:316] joinCluster: &{Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.50.199 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.20.59.36 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 23:21:03.035310    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0307 23:21:03.035310    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:21:05.067060    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:21:05.067060    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:21:05.067060    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:21:07.477710    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:21:07.477710    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:21:07.477710    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:21:07.668386    6816 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.6329003s)
	I0307 23:21:07.668386    6816 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.20.59.36 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:21:07.668386    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cec46d.ea12q4hw7balg83q --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-792400-m03 --control-plane --apiserver-advertise-address=172.20.59.36 --apiserver-bind-port=8443"
	I0307 23:21:50.250862    6816 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cec46d.ea12q4hw7balg83q --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-792400-m03 --control-plane --apiserver-advertise-address=172.20.59.36 --apiserver-bind-port=8443": (42.5820801s)
	I0307 23:21:50.250992    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0307 23:21:51.048382    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792400-m03 minikube.k8s.io/updated_at=2024_03_07T23_21_51_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=ha-792400 minikube.k8s.io/primary=false
	I0307 23:21:51.232131    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-792400-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0307 23:21:51.391221    6816 start.go:318] duration metric: took 48.3554605s to joinCluster
	I0307 23:21:51.391221    6816 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.20.59.36 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:21:51.396217    6816 out.go:177] * Verifying Kubernetes components...
	I0307 23:21:51.392246    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:21:51.410233    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:21:51.767460    6816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 23:21:51.803438    6816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:21:51.804240    6816 kapi.go:59] client config for ha-792400: &rest.Config{Host:"https://172.20.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-792400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-792400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0307 23:21:51.804330    6816 kubeadm.go:477] Overriding stale ClientConfig host https://172.20.63.254:8443 with https://172.20.58.169:8443
	I0307 23:21:51.805254    6816 node_ready.go:35] waiting up to 6m0s for node "ha-792400-m03" to be "Ready" ...
	I0307 23:21:51.805459    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:51.805509    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:51.805509    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:51.805543    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:51.821751    6816 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0307 23:21:52.309212    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:52.309395    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:52.309395    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:52.309395    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:52.314876    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:21:52.818244    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:52.818244    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:52.818244    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:52.818244    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:52.823900    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:21:53.311564    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:53.311791    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:53.311791    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:53.311862    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:53.316772    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:53.815711    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:53.815711    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:53.815711    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:53.815711    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:53.820720    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:21:53.822018    6816 node_ready.go:53] node "ha-792400-m03" has status "Ready":"False"
	I0307 23:21:54.305839    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:54.305839    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:54.305839    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:54.305839    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:54.310619    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:54.812466    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:54.812672    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:54.812732    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:54.812732    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:54.817487    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:55.317067    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:55.317152    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:55.317152    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:55.317207    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:55.321571    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:55.807758    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:55.807758    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:55.807758    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:55.807758    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:55.812339    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:56.312273    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:56.312273    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:56.312273    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:56.312273    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:56.320401    6816 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0307 23:21:56.322124    6816 node_ready.go:53] node "ha-792400-m03" has status "Ready":"False"
	I0307 23:21:56.818808    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:56.818808    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:56.818808    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:56.818808    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:56.823213    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:57.308028    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:57.308271    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:57.308271    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:57.308271    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:57.808452    6816 round_trippers.go:574] Response Status: 200 OK in 500 milliseconds
	I0307 23:21:57.810012    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:57.810012    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:57.810012    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:57.810012    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:57.816294    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:21:58.310632    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:58.310632    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:58.310632    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:58.310632    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:58.317366    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:21:58.820746    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:58.820746    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:58.820746    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:58.820746    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:58.825490    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:58.826836    6816 node_ready.go:53] node "ha-792400-m03" has status "Ready":"False"
	I0307 23:21:59.306958    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:59.306958    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:59.306958    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:59.306958    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:59.312533    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:59.807924    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:59.808115    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:59.808115    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:59.808115    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:59.815234    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:22:00.308331    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:00.308420    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:00.308420    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:00.308420    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:00.313116    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:00.810423    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:00.810631    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:00.810631    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:00.810631    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:00.815913    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:01.308477    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:01.308698    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:01.308698    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:01.308698    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:01.312832    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:01.314210    6816 node_ready.go:53] node "ha-792400-m03" has status "Ready":"False"
	I0307 23:22:01.810640    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:01.810640    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:01.810640    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:01.810640    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:01.815281    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:02.312629    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:02.312758    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:02.312758    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:02.312758    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:02.317145    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:02.817754    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:02.817754    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:02.817754    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:02.817849    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:02.823377    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:03.306871    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:03.306871    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.306871    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.306871    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.310514    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:22:03.311789    6816 node_ready.go:49] node "ha-792400-m03" has status "Ready":"True"
	I0307 23:22:03.311877    6816 node_ready.go:38] duration metric: took 11.5065144s for node "ha-792400-m03" to be "Ready" ...
	I0307 23:22:03.311877    6816 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 23:22:03.312054    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:22:03.312054    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.312054    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.312054    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.322640    6816 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0307 23:22:03.332515    6816 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-28rtr" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.332610    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-28rtr
	I0307 23:22:03.332672    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.332672    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.332672    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.337028    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:03.338111    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:03.338198    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.338198    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.338198    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.342311    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:03.342646    6816 pod_ready.go:92] pod "coredns-5dd5756b68-28rtr" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:03.342646    6816 pod_ready.go:81] duration metric: took 10.1305ms for pod "coredns-5dd5756b68-28rtr" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.342646    6816 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rx9dg" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.342646    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rx9dg
	I0307 23:22:03.342646    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.342646    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.343370    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.346417    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:22:03.348619    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:03.348619    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.348619    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.348619    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.352195    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:22:03.353287    6816 pod_ready.go:92] pod "coredns-5dd5756b68-rx9dg" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:03.353287    6816 pod_ready.go:81] duration metric: took 10.641ms for pod "coredns-5dd5756b68-rx9dg" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.353379    6816 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.353425    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400
	I0307 23:22:03.353425    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.353425    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.353425    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.356971    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:22:03.358335    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:03.358389    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.358389    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.358389    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.362012    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:22:03.362974    6816 pod_ready.go:92] pod "etcd-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:03.362974    6816 pod_ready.go:81] duration metric: took 9.5943ms for pod "etcd-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.362974    6816 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.362974    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m02
	I0307 23:22:03.362974    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.362974    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.362974    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.367184    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:03.368167    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:03.368167    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.368167    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.368167    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.372185    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:03.373547    6816 pod_ready.go:92] pod "etcd-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:03.373547    6816 pod_ready.go:81] duration metric: took 10.5739ms for pod "etcd-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.373600    6816 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.507402    6816 request.go:629] Waited for 133.8014ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m03
	I0307 23:22:03.507655    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m03
	I0307 23:22:03.507655    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.507655    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.507655    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.511236    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:22:03.710280    6816 request.go:629] Waited for 197.2041ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:03.710280    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:03.710280    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.710280    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.710280    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.714900    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:03.913869    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m03
	I0307 23:22:03.913869    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.913869    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.913869    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.918257    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:04.120879    6816 request.go:629] Waited for 201.188ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:04.121267    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:04.121314    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:04.121314    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:04.121314    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:04.126967    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:04.387722    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m03
	I0307 23:22:04.387914    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:04.387914    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:04.387914    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:04.397154    6816 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0307 23:22:04.514318    6816 request.go:629] Waited for 115.9844ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:04.514423    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:04.514475    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:04.514475    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:04.514475    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:04.518867    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:04.889193    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m03
	I0307 23:22:04.889193    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:04.889193    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:04.889193    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:04.893588    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:04.920302    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:04.920482    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:04.920482    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:04.920482    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:04.925075    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:05.377905    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m03
	I0307 23:22:05.377905    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:05.377905    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:05.377982    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:05.395380    6816 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0307 23:22:05.396045    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:05.396045    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:05.396045    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:05.396045    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:05.411649    6816 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0307 23:22:05.412613    6816 pod_ready.go:92] pod "etcd-ha-792400-m03" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:05.412687    6816 pod_ready.go:81] duration metric: took 2.0390687s for pod "etcd-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:05.412687    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:05.519271    6816 request.go:629] Waited for 106.305ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400
	I0307 23:22:05.519393    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400
	I0307 23:22:05.519393    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:05.519393    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:05.519393    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:05.526614    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:22:05.707794    6816 request.go:629] Waited for 180.1528ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:05.707910    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:05.707910    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:05.708057    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:05.708057    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:05.714767    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:22:05.715418    6816 pod_ready.go:92] pod "kube-apiserver-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:05.715418    6816 pod_ready.go:81] duration metric: took 302.7279ms for pod "kube-apiserver-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:05.715418    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:05.910848    6816 request.go:629] Waited for 195.2327ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400-m02
	I0307 23:22:05.910923    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400-m02
	I0307 23:22:05.910923    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:05.910923    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:05.911000    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:05.915376    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:06.114415    6816 request.go:629] Waited for 197.3001ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:06.114631    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:06.114631    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:06.114631    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:06.114631    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:06.119331    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:06.120571    6816 pod_ready.go:92] pod "kube-apiserver-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:06.120680    6816 pod_ready.go:81] duration metric: took 405.2583ms for pod "kube-apiserver-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:06.120680    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:06.316534    6816 request.go:629] Waited for 195.7646ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400-m03
	I0307 23:22:06.316677    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400-m03
	I0307 23:22:06.316737    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:06.316765    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:06.316765    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:06.321514    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:06.518902    6816 request.go:629] Waited for 195.6939ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:06.518978    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:06.518978    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:06.518978    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:06.518978    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:06.523574    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:06.524962    6816 pod_ready.go:92] pod "kube-apiserver-ha-792400-m03" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:06.524962    6816 pod_ready.go:81] duration metric: took 404.2776ms for pod "kube-apiserver-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:06.525142    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:06.719992    6816 request.go:629] Waited for 194.728ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400
	I0307 23:22:06.720391    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400
	I0307 23:22:06.720391    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:06.720391    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:06.720463    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:06.725696    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:06.909199    6816 request.go:629] Waited for 181.7967ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:06.909469    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:06.909469    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:06.909469    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:06.909469    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:06.914267    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:06.915515    6816 pod_ready.go:92] pod "kube-controller-manager-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:06.915582    6816 pod_ready.go:81] duration metric: took 390.4362ms for pod "kube-controller-manager-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:06.915582    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:07.112189    6816 request.go:629] Waited for 196.2786ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400-m02
	I0307 23:22:07.112442    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400-m02
	I0307 23:22:07.112484    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:07.112484    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:07.112484    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:07.118275    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:07.316241    6816 request.go:629] Waited for 196.6902ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:07.316408    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:07.316474    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:07.316474    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:07.316474    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:07.322039    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:07.322693    6816 pod_ready.go:92] pod "kube-controller-manager-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:07.322693    6816 pod_ready.go:81] duration metric: took 407.1074ms for pod "kube-controller-manager-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:07.322693    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:07.516496    6816 request.go:629] Waited for 193.8008ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400-m03
	I0307 23:22:07.516587    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400-m03
	I0307 23:22:07.516587    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:07.516587    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:07.516587    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:07.524279    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:22:07.719887    6816 request.go:629] Waited for 194.2236ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:07.720094    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:07.720094    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:07.720196    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:07.720196    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:07.726816    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:22:07.727745    6816 pod_ready.go:92] pod "kube-controller-manager-ha-792400-m03" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:07.727830    6816 pod_ready.go:81] duration metric: took 405.1325ms for pod "kube-controller-manager-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:07.727867    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2rxpp" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:07.909979    6816 request.go:629] Waited for 182.1101ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2rxpp
	I0307 23:22:07.909979    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2rxpp
	I0307 23:22:07.909979    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:07.909979    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:07.909979    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:07.918860    6816 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0307 23:22:08.116254    6816 request.go:629] Waited for 195.4808ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:08.116436    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:08.116535    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:08.116535    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:08.116535    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:08.121668    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:08.122464    6816 pod_ready.go:92] pod "kube-proxy-2rxpp" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:08.122531    6816 pod_ready.go:81] duration metric: took 394.6603ms for pod "kube-proxy-2rxpp" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:08.122531    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6wd5" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:08.317811    6816 request.go:629] Waited for 195.0091ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6wd5
	I0307 23:22:08.318045    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6wd5
	I0307 23:22:08.318122    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:08.318174    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:08.318194    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:08.323124    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:22:08.522082    6816 request.go:629] Waited for 198.5341ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:08.522082    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:08.522082    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:08.522082    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:08.522082    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:08.526487    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:08.528075    6816 pod_ready.go:92] pod "kube-proxy-j6wd5" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:08.528075    6816 pod_ready.go:81] duration metric: took 405.54ms for pod "kube-proxy-j6wd5" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:08.528075    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zxmcc" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:08.707970    6816 request.go:629] Waited for 179.8935ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxmcc
	I0307 23:22:08.708379    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxmcc
	I0307 23:22:08.708379    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:08.708379    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:08.708379    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:08.712682    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:08.911145    6816 request.go:629] Waited for 196.7134ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:08.911304    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:08.911304    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:08.911304    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:08.911304    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:08.916642    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:08.917489    6816 pod_ready.go:92] pod "kube-proxy-zxmcc" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:08.917489    6816 pod_ready.go:81] duration metric: took 389.4108ms for pod "kube-proxy-zxmcc" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:08.917489    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:09.115217    6816 request.go:629] Waited for 197.7258ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400
	I0307 23:22:09.115772    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400
	I0307 23:22:09.115772    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:09.115772    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:09.115772    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:09.121155    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:09.318165    6816 request.go:629] Waited for 195.3691ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:09.318414    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:09.318414    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:09.318414    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:09.318414    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:09.323815    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:09.324896    6816 pod_ready.go:92] pod "kube-scheduler-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:09.324974    6816 pod_ready.go:81] duration metric: took 407.481ms for pod "kube-scheduler-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:09.324974    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:09.520557    6816 request.go:629] Waited for 195.3123ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400-m02
	I0307 23:22:09.520690    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400-m02
	I0307 23:22:09.520690    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:09.520690    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:09.520690    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:09.524864    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:09.710530    6816 request.go:629] Waited for 183.349ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:09.710626    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:09.710626    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:09.710709    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:09.710709    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:09.715032    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:09.716348    6816 pod_ready.go:92] pod "kube-scheduler-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:09.716468    6816 pod_ready.go:81] duration metric: took 391.4897ms for pod "kube-scheduler-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:09.716468    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:09.914528    6816 request.go:629] Waited for 197.6487ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400-m03
	I0307 23:22:09.914715    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400-m03
	I0307 23:22:09.914715    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:09.914715    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:09.914715    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:09.920047    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:10.119608    6816 request.go:629] Waited for 198.4471ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:10.119790    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:10.119851    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:10.119917    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:10.119976    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:10.124489    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:10.125544    6816 pod_ready.go:92] pod "kube-scheduler-ha-792400-m03" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:10.125647    6816 pod_ready.go:81] duration metric: took 409.1209ms for pod "kube-scheduler-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:10.125647    6816 pod_ready.go:38] duration metric: took 6.8136182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 23:22:10.125647    6816 api_server.go:52] waiting for apiserver process to appear ...
	I0307 23:22:10.137842    6816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 23:22:10.165264    6816 api_server.go:72] duration metric: took 18.7737861s to wait for apiserver process to appear ...
	I0307 23:22:10.165264    6816 api_server.go:88] waiting for apiserver healthz status ...
	I0307 23:22:10.165264    6816 api_server.go:253] Checking apiserver healthz at https://172.20.58.169:8443/healthz ...
	I0307 23:22:10.172335    6816 api_server.go:279] https://172.20.58.169:8443/healthz returned 200:
	ok
	I0307 23:22:10.173021    6816 round_trippers.go:463] GET https://172.20.58.169:8443/version
	I0307 23:22:10.173021    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:10.173021    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:10.173021    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:10.174322    6816 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 23:22:10.174322    6816 api_server.go:141] control plane version: v1.28.4
	I0307 23:22:10.174322    6816 api_server.go:131] duration metric: took 9.0577ms to wait for apiserver health ...
	I0307 23:22:10.174322    6816 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 23:22:10.321628    6816 request.go:629] Waited for 147.3046ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:22:10.321628    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:22:10.321628    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:10.321628    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:10.321628    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:10.331942    6816 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0307 23:22:10.344822    6816 system_pods.go:59] 24 kube-system pods found
	I0307 23:22:10.344822    6816 system_pods.go:61] "coredns-5dd5756b68-28rtr" [8f70fcea-fb5e-4bfe-a184-a7487922459d] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "coredns-5dd5756b68-rx9dg" [09969ba6-29bd-449a-8df2-85d52c1cca8e] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "etcd-ha-792400" [6d4e209d-fc9c-4f71-a13f-b359b65ae7ad] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "etcd-ha-792400-m02" [ed952253-b72b-4443-9189-ad1dcfabc268] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "etcd-ha-792400-m03" [048f57d4-7047-45b1-b865-e5768ce81ebf] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kindnet-7bztm" [a0918f25-6cde-462e-8f12-58c424e25ffa] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kindnet-fvx87" [e26e6f69-a3e8-4b89-9ec0-21959683db17] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kindnet-nwgxl" [07d0d037-8522-4af4-9c41-d05bad3c2753] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-apiserver-ha-792400" [2356c8e9-8a52-4bf2-b8e6-24974e45f15c] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-apiserver-ha-792400-m02" [54d24fa6-cc12-47f7-89b8-07c35b710b9c] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-apiserver-ha-792400-m03" [f689ec77-3fff-48a7-bef0-6ca89dbae7fa] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-controller-manager-ha-792400" [57efa972-84b4-4614-b8e0-c6e3eeef55f7] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-controller-manager-ha-792400-m02" [3a897c1b-a6a9-4ecb-abb4-f350789cde8a] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-controller-manager-ha-792400-m03" [e58b980b-940b-4da9-868a-d5c6d7d8b8e3] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-proxy-2rxpp" [ea9a7d5a-b760-4056-ab38-cfa70276c427] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-proxy-j6wd5" [bc09092e-551d-448f-af38-f8412bdcfe3a] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-proxy-zxmcc" [0a429b85-7b58-447e-963b-39976d48fba0] Running
	I0307 23:22:10.345362    6816 system_pods.go:61] "kube-scheduler-ha-792400" [24c51162-87f0-4232-bc6a-32aef6110baa] Running
	I0307 23:22:10.345362    6816 system_pods.go:61] "kube-scheduler-ha-792400-m02" [26d95aae-6bc6-4245-a5de-3848b6e4d1c2] Running
	I0307 23:22:10.345362    6816 system_pods.go:61] "kube-scheduler-ha-792400-m03" [daaf3e0b-85a8-4d7f-998b-3c07e04d010b] Running
	I0307 23:22:10.345362    6816 system_pods.go:61] "kube-vip-ha-792400" [31f2517d-5b88-4c07-87cd-66c667534a2f] Running
	I0307 23:22:10.345362    6816 system_pods.go:61] "kube-vip-ha-792400-m02" [b41fc2d0-39a4-4fba-867d-371a5c918c90] Running
	I0307 23:22:10.345362    6816 system_pods.go:61] "kube-vip-ha-792400-m03" [eb0f9382-0ea4-4cb2-9c1e-06d1f891ab99] Running
	I0307 23:22:10.345362    6816 system_pods.go:61] "storage-provisioner" [d2cfae90-8302-4ce4-8292-de4938b0b9ae] Running
	I0307 23:22:10.345362    6816 system_pods.go:74] duration metric: took 171.0377ms to wait for pod list to return data ...
	I0307 23:22:10.345362    6816 default_sa.go:34] waiting for default service account to be created ...
	I0307 23:22:10.509632    6816 request.go:629] Waited for 163.7269ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/default/serviceaccounts
	I0307 23:22:10.509632    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/default/serviceaccounts
	I0307 23:22:10.509632    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:10.509632    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:10.509632    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:10.516303    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:22:10.517001    6816 default_sa.go:45] found service account: "default"
	I0307 23:22:10.517069    6816 default_sa.go:55] duration metric: took 171.7054ms for default service account to be created ...
	I0307 23:22:10.517069    6816 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 23:22:10.712068    6816 request.go:629] Waited for 194.8586ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:22:10.712068    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:22:10.712068    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:10.712068    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:10.712068    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:10.720950    6816 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0307 23:22:10.731284    6816 system_pods.go:86] 24 kube-system pods found
	I0307 23:22:10.731284    6816 system_pods.go:89] "coredns-5dd5756b68-28rtr" [8f70fcea-fb5e-4bfe-a184-a7487922459d] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "coredns-5dd5756b68-rx9dg" [09969ba6-29bd-449a-8df2-85d52c1cca8e] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "etcd-ha-792400" [6d4e209d-fc9c-4f71-a13f-b359b65ae7ad] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "etcd-ha-792400-m02" [ed952253-b72b-4443-9189-ad1dcfabc268] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "etcd-ha-792400-m03" [048f57d4-7047-45b1-b865-e5768ce81ebf] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "kindnet-7bztm" [a0918f25-6cde-462e-8f12-58c424e25ffa] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "kindnet-fvx87" [e26e6f69-a3e8-4b89-9ec0-21959683db17] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "kindnet-nwgxl" [07d0d037-8522-4af4-9c41-d05bad3c2753] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "kube-apiserver-ha-792400" [2356c8e9-8a52-4bf2-b8e6-24974e45f15c] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "kube-apiserver-ha-792400-m02" [54d24fa6-cc12-47f7-89b8-07c35b710b9c] Running
	I0307 23:22:10.731862    6816 system_pods.go:89] "kube-apiserver-ha-792400-m03" [f689ec77-3fff-48a7-bef0-6ca89dbae7fa] Running
	I0307 23:22:10.731919    6816 system_pods.go:89] "kube-controller-manager-ha-792400" [57efa972-84b4-4614-b8e0-c6e3eeef55f7] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-controller-manager-ha-792400-m02" [3a897c1b-a6a9-4ecb-abb4-f350789cde8a] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-controller-manager-ha-792400-m03" [e58b980b-940b-4da9-868a-d5c6d7d8b8e3] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-proxy-2rxpp" [ea9a7d5a-b760-4056-ab38-cfa70276c427] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-proxy-j6wd5" [bc09092e-551d-448f-af38-f8412bdcfe3a] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-proxy-zxmcc" [0a429b85-7b58-447e-963b-39976d48fba0] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-scheduler-ha-792400" [24c51162-87f0-4232-bc6a-32aef6110baa] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-scheduler-ha-792400-m02" [26d95aae-6bc6-4245-a5de-3848b6e4d1c2] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-scheduler-ha-792400-m03" [daaf3e0b-85a8-4d7f-998b-3c07e04d010b] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-vip-ha-792400" [31f2517d-5b88-4c07-87cd-66c667534a2f] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-vip-ha-792400-m02" [b41fc2d0-39a4-4fba-867d-371a5c918c90] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-vip-ha-792400-m03" [eb0f9382-0ea4-4cb2-9c1e-06d1f891ab99] Running
	I0307 23:22:10.732181    6816 system_pods.go:89] "storage-provisioner" [d2cfae90-8302-4ce4-8292-de4938b0b9ae] Running
	I0307 23:22:10.732181    6816 system_pods.go:126] duration metric: took 215.1106ms to wait for k8s-apps to be running ...
	I0307 23:22:10.732181    6816 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 23:22:10.743666    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 23:22:10.768375    6816 system_svc.go:56] duration metric: took 36.1347ms WaitForService to wait for kubelet
	I0307 23:22:10.768375    6816 kubeadm.go:576] duration metric: took 19.376972s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 23:22:10.768454    6816 node_conditions.go:102] verifying NodePressure condition ...
	I0307 23:22:10.916510    6816 request.go:629] Waited for 147.9773ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes
	I0307 23:22:10.916741    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes
	I0307 23:22:10.916741    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:10.916741    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:10.916741    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:10.921833    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:10.923806    6816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 23:22:10.923806    6816 node_conditions.go:123] node cpu capacity is 2
	I0307 23:22:10.923871    6816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 23:22:10.923871    6816 node_conditions.go:123] node cpu capacity is 2
	I0307 23:22:10.923871    6816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 23:22:10.923871    6816 node_conditions.go:123] node cpu capacity is 2
	I0307 23:22:10.923871    6816 node_conditions.go:105] duration metric: took 155.4152ms to run NodePressure ...
	I0307 23:22:10.923871    6816 start.go:240] waiting for startup goroutines ...
	I0307 23:22:10.923871    6816 start.go:254] writing updated cluster config ...
	I0307 23:22:10.935632    6816 ssh_runner.go:195] Run: rm -f paused
	I0307 23:22:11.075440    6816 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0307 23:22:11.078840    6816 out.go:177] * Done! kubectl is now configured to use "ha-792400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 07 23:18:11 ha-792400 dockerd[1314]: time="2024-03-07T23:18:11.708931596Z" level=info msg="shim disconnected" id=2daf2cbbe82d3a521289817e25889c3648a5173475004c4613e5691e15669dea namespace=moby
	Mar 07 23:18:11 ha-792400 dockerd[1314]: time="2024-03-07T23:18:11.708999698Z" level=warning msg="cleaning up after shim disconnected" id=2daf2cbbe82d3a521289817e25889c3648a5173475004c4613e5691e15669dea namespace=moby
	Mar 07 23:18:11 ha-792400 dockerd[1314]: time="2024-03-07T23:18:11.709011999Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 23:18:12 ha-792400 dockerd[1308]: time="2024-03-07T23:18:12.178688324Z" level=info msg="ignoring event" container=20e4ebbcc8a68e4542e27d912a6e3a14783afdf7df30d88386e8f4667dd8986e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 23:18:12 ha-792400 dockerd[1314]: time="2024-03-07T23:18:12.182561552Z" level=info msg="shim disconnected" id=20e4ebbcc8a68e4542e27d912a6e3a14783afdf7df30d88386e8f4667dd8986e namespace=moby
	Mar 07 23:18:12 ha-792400 dockerd[1314]: time="2024-03-07T23:18:12.182833361Z" level=warning msg="cleaning up after shim disconnected" id=20e4ebbcc8a68e4542e27d912a6e3a14783afdf7df30d88386e8f4667dd8986e namespace=moby
	Mar 07 23:18:12 ha-792400 dockerd[1314]: time="2024-03-07T23:18:12.183001067Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 23:18:12 ha-792400 dockerd[1314]: time="2024-03-07T23:18:12.333036326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 23:18:12 ha-792400 dockerd[1314]: time="2024-03-07T23:18:12.333357337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 23:18:12 ha-792400 dockerd[1314]: time="2024-03-07T23:18:12.333466541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:18:12 ha-792400 dockerd[1314]: time="2024-03-07T23:18:12.333760750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:18:13 ha-792400 dockerd[1314]: time="2024-03-07T23:18:13.308005359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 23:18:13 ha-792400 dockerd[1314]: time="2024-03-07T23:18:13.308192665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 23:18:13 ha-792400 dockerd[1314]: time="2024-03-07T23:18:13.308232667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:18:13 ha-792400 dockerd[1314]: time="2024-03-07T23:18:13.308681281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:22:47 ha-792400 dockerd[1314]: time="2024-03-07T23:22:47.308664929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 23:22:47 ha-792400 dockerd[1314]: time="2024-03-07T23:22:47.308886636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 23:22:47 ha-792400 dockerd[1314]: time="2024-03-07T23:22:47.308919337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:22:47 ha-792400 dockerd[1314]: time="2024-03-07T23:22:47.309344752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:22:47 ha-792400 cri-dockerd[1200]: time="2024-03-07T23:22:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fd4b0e249592808d75765de1fc6ca7e6e072768f1ca17d13c7e995c224c3d131/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 07 23:22:48 ha-792400 cri-dockerd[1200]: time="2024-03-07T23:22:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 07 23:22:49 ha-792400 dockerd[1314]: time="2024-03-07T23:22:49.035744772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 23:22:49 ha-792400 dockerd[1314]: time="2024-03-07T23:22:49.036125174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 23:22:49 ha-792400 dockerd[1314]: time="2024-03-07T23:22:49.036373175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:22:49 ha-792400 dockerd[1314]: time="2024-03-07T23:22:49.036854078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	cb1b44317c3b9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   fd4b0e2495928       busybox-5b5d89c9d6-wmtt9
	0315e442ba536       22aaebb38f4a9                                                                                         5 minutes ago        Running             kube-vip                  1                   2aa33ef112e26       kube-vip-ha-792400
	9538b967bece1       6e38f40d628db                                                                                         5 minutes ago        Running             storage-provisioner       1                   d74f2c3b71b39       storage-provisioner
	3fc0d637315e9       ead0a4a53df89                                                                                         9 minutes ago        Running             coredns                   0                   355749546e87f       coredns-5dd5756b68-28rtr
	0813d71e015b1       ead0a4a53df89                                                                                         9 minutes ago        Running             coredns                   0                   6c7c323c35782       coredns-5dd5756b68-rx9dg
	2daf2cbbe82d3       6e38f40d628db                                                                                         9 minutes ago        Exited              storage-provisioner       0                   d74f2c3b71b39       storage-provisioner
	acd6e0511261f       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              9 minutes ago        Running             kindnet-cni               0                   55a843de34893       kindnet-7bztm
	59baf1bee5fee       83f6cc407eed8                                                                                         9 minutes ago        Running             kube-proxy                0                   2ed7ae465f26f       kube-proxy-zxmcc
	20e4ebbcc8a68       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     9 minutes ago        Exited              kube-vip                  0                   2aa33ef112e26       kube-vip-ha-792400
	45cfa4cc5c464       d058aa5ab969c                                                                                         9 minutes ago        Running             kube-controller-manager   0                   762cca51fa8d5       kube-controller-manager-ha-792400
	7f9766203c094       e3db313c6dbc0                                                                                         9 minutes ago        Running             kube-scheduler            0                   0e9ab11944533       kube-scheduler-ha-792400
	678da783bb32e       7fe0e6f37db33                                                                                         9 minutes ago        Running             kube-apiserver            0                   38ae89ab9f3cc       kube-apiserver-ha-792400
	8913a536cdd19       73deb9a3f7025                                                                                         9 minutes ago        Running             etcd                      0                   5aff95ebbe774       etcd-ha-792400
	
	
	==> coredns [0813d71e015b] <==
	[INFO] 10.244.1.2:58689 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107701s
	[INFO] 10.244.2.2:56925 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033465166s
	[INFO] 10.244.2.2:59838 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094701s
	[INFO] 10.244.2.2:55718 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121501s
	[INFO] 10.244.2.2:49031 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162501s
	[INFO] 10.244.2.2:44274 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184201s
	[INFO] 10.244.0.4:33345 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000070701s
	[INFO] 10.244.0.4:40600 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144201s
	[INFO] 10.244.0.4:59482 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124501s
	[INFO] 10.244.1.2:48839 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166401s
	[INFO] 10.244.1.2:49792 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000548s
	[INFO] 10.244.1.2:37296 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000634s
	[INFO] 10.244.1.2:52625 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167301s
	[INFO] 10.244.2.2:49914 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001306s
	[INFO] 10.244.2.2:49704 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132401s
	[INFO] 10.244.2.2:58265 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000539s
	[INFO] 10.244.0.4:35424 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000942s
	[INFO] 10.244.0.4:39973 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109101s
	[INFO] 10.244.1.2:41011 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185201s
	[INFO] 10.244.1.2:54371 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100001s
	[INFO] 10.244.1.2:46308 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000916s
	[INFO] 10.244.2.2:45164 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114001s
	[INFO] 10.244.0.4:58909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130901s
	[INFO] 10.244.0.4:42049 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000709s
	[INFO] 10.244.0.4:46367 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000587904s
	
	
	==> coredns [3fc0d637315e] <==
	[INFO] 10.244.1.2:55904 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.148342111s
	[INFO] 10.244.1.2:57656 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.166704326s
	[INFO] 10.244.2.2:38876 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.030427749s
	[INFO] 10.244.2.2:52435 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000162001s
	[INFO] 10.244.0.4:55008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000397102s
	[INFO] 10.244.1.2:49148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195601s
	[INFO] 10.244.1.2:41844 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.048285741s
	[INFO] 10.244.1.2:34705 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186701s
	[INFO] 10.244.1.2:47785 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110001s
	[INFO] 10.244.2.2:49603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000967s
	[INFO] 10.244.2.2:51221 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010920754s
	[INFO] 10.244.2.2:51671 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000741s
	[INFO] 10.244.0.4:59914 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001468s
	[INFO] 10.244.0.4:40006 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000230301s
	[INFO] 10.244.0.4:57558 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000147801s
	[INFO] 10.244.0.4:43569 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111701s
	[INFO] 10.244.0.4:42521 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001403s
	[INFO] 10.244.2.2:45389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177501s
	[INFO] 10.244.0.4:53457 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000276301s
	[INFO] 10.244.0.4:47763 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153501s
	[INFO] 10.244.1.2:50765 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000240302s
	[INFO] 10.244.2.2:41069 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251502s
	[INFO] 10.244.2.2:55299 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000172601s
	[INFO] 10.244.2.2:49701 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0000874s
	[INFO] 10.244.0.4:51908 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169701s
	
	
	==> describe nodes <==
	Name:               ha-792400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=ha-792400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T23_14_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 23:14:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792400
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 23:23:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 23:22:53 +0000   Thu, 07 Mar 2024 23:14:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 23:22:53 +0000   Thu, 07 Mar 2024 23:14:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 23:22:53 +0000   Thu, 07 Mar 2024 23:14:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 23:22:53 +0000   Thu, 07 Mar 2024 23:14:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.58.169
	  Hostname:    ha-792400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 518f80544d79436691eb013fb81341e0
	  System UUID:                4e875024-2316-c944-8dba-40e02e382e31
	  Boot ID:                    5470a58a-ec3e-4fa3-9eae-64bab2e66d3b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-wmtt9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 coredns-5dd5756b68-28rtr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m17s
	  kube-system                 coredns-5dd5756b68-rx9dg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m17s
	  kube-system                 etcd-ha-792400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m30s
	  kube-system                 kindnet-7bztm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m17s
	  kube-system                 kube-apiserver-ha-792400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 kube-controller-manager-ha-792400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 kube-proxy-zxmcc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-scheduler-ha-792400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 kube-vip-ha-792400                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m16s  kube-proxy       
	  Normal  Starting                 9m30s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m30s  kubelet          Node ha-792400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m30s  kubelet          Node ha-792400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m30s  kubelet          Node ha-792400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m30s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m18s  node-controller  Node ha-792400 event: Registered Node ha-792400 in Controller
	  Normal  NodeReady                9m6s   kubelet          Node ha-792400 status is now: NodeReady
	  Normal  RegisteredNode           5m19s  node-controller  Node ha-792400 event: Registered Node ha-792400 in Controller
	  Normal  RegisteredNode           105s   node-controller  Node ha-792400 event: Registered Node ha-792400 in Controller
	
	
	Name:               ha-792400-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=ha-792400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_07T23_18_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 23:18:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 23:23:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 23:23:03 +0000   Thu, 07 Mar 2024 23:18:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 23:23:03 +0000   Thu, 07 Mar 2024 23:18:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 23:23:03 +0000   Thu, 07 Mar 2024 23:18:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 23:23:03 +0000   Thu, 07 Mar 2024 23:18:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.50.199
	  Hostname:    ha-792400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 c06e0c0854ca4c2588f630a0a76a7d32
	  System UUID:                09cbc96a-b12f-7641-9990-7acdf96b88ef
	  Boot ID:                    07286d93-0fba-4108-933d-df1b049fc5bf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-8vztn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 etcd-ha-792400-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m49s
	  kube-system                 kindnet-fvx87                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m49s
	  kube-system                 kube-apiserver-ha-792400-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-controller-manager-ha-792400-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-proxy-j6wd5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-scheduler-ha-792400-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-vip-ha-792400-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m30s  kube-proxy       
	  Normal  RegisteredNode  5m48s  node-controller  Node ha-792400-m02 event: Registered Node ha-792400-m02 in Controller
	  Normal  RegisteredNode  5m19s  node-controller  Node ha-792400-m02 event: Registered Node ha-792400-m02 in Controller
	  Normal  RegisteredNode  105s   node-controller  Node ha-792400-m02 event: Registered Node ha-792400-m02 in Controller
	
	
	Name:               ha-792400-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=ha-792400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_07T23_21_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 23:21:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 23:23:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 23:23:17 +0000   Thu, 07 Mar 2024 23:21:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 23:23:17 +0000   Thu, 07 Mar 2024 23:21:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 23:23:17 +0000   Thu, 07 Mar 2024 23:21:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 23:23:17 +0000   Thu, 07 Mar 2024 23:22:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.59.36
	  Hostname:    ha-792400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 2632989d3c70459290afc2ae7511010b
	  System UUID:                6840328b-e690-ab4b-a122-61c112570da5
	  Boot ID:                    b813436e-fd9d-48ec-9666-c69e1df60d6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-dswbq                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 etcd-ha-792400-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m4s
	  kube-system                 kindnet-nwgxl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m4s
	  kube-system                 kube-apiserver-ha-792400-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-ha-792400-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-2rxpp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-scheduler-ha-792400-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-vip-ha-792400-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        2m    kube-proxy       
	  Normal  RegisteredNode  2m4s  node-controller  Node ha-792400-m03 event: Registered Node ha-792400-m03 in Controller
	  Normal  RegisteredNode  2m3s  node-controller  Node ha-792400-m03 event: Registered Node ha-792400-m03 in Controller
	  Normal  RegisteredNode  105s  node-controller  Node ha-792400-m03 event: Registered Node ha-792400-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar 7 23:13] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.147326] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[ +25.627823] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +0.082793] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.457977] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[  +0.157469] systemd-fstab-generator[981]: Ignoring "noauto" option for root device
	[  +0.199615] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +1.733500] systemd-fstab-generator[1153]: Ignoring "noauto" option for root device
	[  +0.179026] systemd-fstab-generator[1165]: Ignoring "noauto" option for root device
	[  +0.162468] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +0.236273] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[Mar 7 23:14] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.084202] kauditd_printk_skb: 205 callbacks suppressed
	[  +2.582960] systemd-fstab-generator[1486]: Ignoring "noauto" option for root device
	[  +5.798457] systemd-fstab-generator[1751]: Ignoring "noauto" option for root device
	[  +0.084148] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.319481] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.524949] systemd-fstab-generator[2480]: Ignoring "noauto" option for root device
	[ +13.749359] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.845161] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.194244] kauditd_printk_skb: 14 callbacks suppressed
	[Mar 7 23:18] kauditd_printk_skb: 13 callbacks suppressed
	[  +8.952770] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [8913a536cdd1] <==
	{"level":"info","ts":"2024-03-07T23:21:56.597801Z","caller":"traceutil/trace.go:171","msg":"trace[1432614876] transaction","detail":"{read_only:false; response_revision:1441; number_of_response:1; }","duration":"184.144489ms","start":"2024-03-07T23:21:56.413642Z","end":"2024-03-07T23:21:56.597787Z","steps":["trace[1432614876] 'process raft request'  (duration: 183.947582ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-07T23:21:56.757704Z","caller":"traceutil/trace.go:171","msg":"trace[433956942] linearizableReadLoop","detail":"{readStateIndex:1613; appliedIndex:1613; }","duration":"107.203103ms","start":"2024-03-07T23:21:56.650486Z","end":"2024-03-07T23:21:56.75769Z","steps":["trace[433956942] 'read index received'  (duration: 107.197603ms)","trace[433956942] 'applied index is now lower than readState.Index'  (duration: 4.6µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-07T23:21:56.76988Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.400614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:3 size:13520"}
	{"level":"info","ts":"2024-03-07T23:21:56.770051Z","caller":"traceutil/trace.go:171","msg":"trace[737670030] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:3; response_revision:1441; }","duration":"119.58002ms","start":"2024-03-07T23:21:56.650461Z","end":"2024-03-07T23:21:56.770041Z","steps":["trace[737670030] 'agreement among raft nodes before linearized reading'  (duration: 107.373209ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-07T23:21:56.770492Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.009396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-07T23:21:56.770786Z","caller":"traceutil/trace.go:171","msg":"trace[1594619150] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:1442; }","duration":"107.305906ms","start":"2024-03-07T23:21:56.663466Z","end":"2024-03-07T23:21:56.770772Z","steps":["trace[1594619150] 'agreement among raft nodes before linearized reading'  (duration: 106.907593ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-07T23:21:56.771287Z","caller":"traceutil/trace.go:171","msg":"trace[1692710363] transaction","detail":"{read_only:false; response_revision:1442; number_of_response:1; }","duration":"154.521694ms","start":"2024-03-07T23:21:56.616752Z","end":"2024-03-07T23:21:56.771274Z","steps":["trace[1692710363] 'process raft request'  (duration: 139.732797ms)","trace[1692710363] 'compare'  (duration: 13.240645ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-07T23:21:57.488906Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"c9c0166b4b2cbaa5","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"58.286882ms"}
	{"level":"warn","ts":"2024-03-07T23:21:57.488975Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"918c2185a187a7c3","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"58.361784ms"}
	{"level":"info","ts":"2024-03-07T23:21:57.491043Z","caller":"traceutil/trace.go:171","msg":"trace[1818018009] linearizableReadLoop","detail":"{readStateIndex:1615; appliedIndex:1615; }","duration":"173.722739ms","start":"2024-03-07T23:21:57.317304Z","end":"2024-03-07T23:21:57.491027Z","steps":["trace[1818018009] 'read index received'  (duration: 173.717539ms)","trace[1818018009] 'applied index is now lower than readState.Index'  (duration: 3.9µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-07T23:21:57.837476Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"520.219886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-03-07T23:21:57.83762Z","caller":"traceutil/trace.go:171","msg":"trace[940331953] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:1443; }","duration":"520.377091ms","start":"2024-03-07T23:21:57.317229Z","end":"2024-03-07T23:21:57.837606Z","steps":["trace[940331953] 'agreement among raft nodes before linearized reading'  (duration: 174.249657ms)","trace[940331953] 'range keys from in-memory index tree'  (duration: 345.915227ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-07T23:21:57.837696Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-07T23:21:57.317216Z","time spent":"520.454593ms","remote":"127.0.0.1:56096","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":457,"request content":"key:\"/registry/leases/kube-system/plndr-cp-lock\" "}
	{"level":"warn","ts":"2024-03-07T23:21:57.837925Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"348.815725ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7725236983280123516 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1438 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1021 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-07T23:21:57.837992Z","caller":"traceutil/trace.go:171","msg":"trace[465808934] linearizableReadLoop","detail":"{readStateIndex:1616; appliedIndex:1615; }","duration":"346.633251ms","start":"2024-03-07T23:21:57.491351Z","end":"2024-03-07T23:21:57.837984Z","steps":["trace[465808934] 'read index received'  (duration: 1.432448ms)","trace[465808934] 'applied index is now lower than readState.Index'  (duration: 345.200003ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-07T23:21:57.838282Z","caller":"traceutil/trace.go:171","msg":"trace[658267887] transaction","detail":"{read_only:false; response_revision:1444; number_of_response:1; }","duration":"600.433581ms","start":"2024-03-07T23:21:57.237794Z","end":"2024-03-07T23:21:57.838228Z","steps":["trace[658267887] 'process raft request'  (duration: 251.254145ms)","trace[658267887] 'compare'  (duration: 348.676119ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-07T23:21:57.838328Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-07T23:21:57.237763Z","time spent":"600.542086ms","remote":"127.0.0.1:56040","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1094,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1438 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1021 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-07T23:21:57.838484Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"431.930818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-ha-792400-m03\" ","response":"range_response_count:1 size:6343"}
	{"level":"info","ts":"2024-03-07T23:21:57.838504Z","caller":"traceutil/trace.go:171","msg":"trace[1338411733] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-ha-792400-m03; range_end:; response_count:1; response_revision:1444; }","duration":"431.953318ms","start":"2024-03-07T23:21:57.406544Z","end":"2024-03-07T23:21:57.838498Z","steps":["trace[1338411733] 'agreement among raft nodes before linearized reading'  (duration: 431.905617ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-07T23:21:57.838523Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-07T23:21:57.406532Z","time spent":"431.98582ms","remote":"127.0.0.1:56052","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":6365,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-ha-792400-m03\" "}
	{"level":"warn","ts":"2024-03-07T23:21:57.838652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"496.092874ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-792400-m03\" ","response":"range_response_count:1 size:4441"}
	{"level":"info","ts":"2024-03-07T23:21:57.83867Z","caller":"traceutil/trace.go:171","msg":"trace[965171864] range","detail":"{range_begin:/registry/minions/ha-792400-m03; range_end:; response_count:1; response_revision:1444; }","duration":"496.110775ms","start":"2024-03-07T23:21:57.342554Z","end":"2024-03-07T23:21:57.838665Z","steps":["trace[965171864] 'agreement among raft nodes before linearized reading'  (duration: 496.071174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-07T23:21:57.838683Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-07T23:21:57.342544Z","time spent":"496.136076ms","remote":"127.0.0.1:56044","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":4463,"request content":"key:\"/registry/minions/ha-792400-m03\" "}
	{"level":"warn","ts":"2024-03-07T23:21:57.840379Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.334187ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-07T23:21:57.840436Z","caller":"traceutil/trace.go:171","msg":"trace[1301781311] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1444; }","duration":"154.393089ms","start":"2024-03-07T23:21:57.686034Z","end":"2024-03-07T23:21:57.840427Z","steps":["trace[1301781311] 'agreement among raft nodes before linearized reading'  (duration: 154.312686ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:23:50 up 11 min,  0 users,  load average: 1.14, 1.04, 0.61
	Linux ha-792400 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [acd6e0511261] <==
	I0307 23:23:06.944977       1 main.go:250] Node ha-792400-m03 has CIDR [10.244.2.0/24] 
	I0307 23:23:16.964523       1 main.go:223] Handling node with IPs: map[172.20.58.169:{}]
	I0307 23:23:16.964612       1 main.go:227] handling current node
	I0307 23:23:16.964627       1 main.go:223] Handling node with IPs: map[172.20.50.199:{}]
	I0307 23:23:16.964634       1 main.go:250] Node ha-792400-m02 has CIDR [10.244.1.0/24] 
	I0307 23:23:16.964763       1 main.go:223] Handling node with IPs: map[172.20.59.36:{}]
	I0307 23:23:16.964921       1 main.go:250] Node ha-792400-m03 has CIDR [10.244.2.0/24] 
	I0307 23:23:26.979012       1 main.go:223] Handling node with IPs: map[172.20.58.169:{}]
	I0307 23:23:26.979053       1 main.go:227] handling current node
	I0307 23:23:26.979064       1 main.go:223] Handling node with IPs: map[172.20.50.199:{}]
	I0307 23:23:26.979070       1 main.go:250] Node ha-792400-m02 has CIDR [10.244.1.0/24] 
	I0307 23:23:26.979559       1 main.go:223] Handling node with IPs: map[172.20.59.36:{}]
	I0307 23:23:26.979683       1 main.go:250] Node ha-792400-m03 has CIDR [10.244.2.0/24] 
	I0307 23:23:36.995693       1 main.go:223] Handling node with IPs: map[172.20.58.169:{}]
	I0307 23:23:36.995735       1 main.go:227] handling current node
	I0307 23:23:36.995747       1 main.go:223] Handling node with IPs: map[172.20.50.199:{}]
	I0307 23:23:36.995754       1 main.go:250] Node ha-792400-m02 has CIDR [10.244.1.0/24] 
	I0307 23:23:36.996104       1 main.go:223] Handling node with IPs: map[172.20.59.36:{}]
	I0307 23:23:36.996195       1 main.go:250] Node ha-792400-m03 has CIDR [10.244.2.0/24] 
	I0307 23:23:47.006635       1 main.go:223] Handling node with IPs: map[172.20.58.169:{}]
	I0307 23:23:47.006751       1 main.go:227] handling current node
	I0307 23:23:47.006765       1 main.go:223] Handling node with IPs: map[172.20.50.199:{}]
	I0307 23:23:47.006773       1 main.go:250] Node ha-792400-m02 has CIDR [10.244.1.0/24] 
	I0307 23:23:47.007414       1 main.go:223] Handling node with IPs: map[172.20.59.36:{}]
	I0307 23:23:47.007512       1 main.go:250] Node ha-792400-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [678da783bb32] <==
	Trace[1087315783]: ["Create etcd3" audit-id:109bdac7-e171-40c7-98cd-f25a77ba9b65,key:/pods/kube-system/kube-controller-manager-ha-792400-m02,type:*core.Pod,resource:pods 6135ms (23:18:10.292)
	Trace[1087315783]:  ---"Txn call succeeded" 6124ms (23:18:16.416)]
	Trace[1087315783]: [6.153906966s] [6.153906966s] END
	I0307 23:18:16.437523       1 trace.go:236] Trace[1266729173]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b0f120ff-a473-4c8d-b57c-b274b9b24484,client:172.20.50.199,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (07-Mar-2024 23:18:10.284) (total time: 6152ms):
	Trace[1266729173]: ["Create etcd3" audit-id:b0f120ff-a473-4c8d-b57c-b274b9b24484,key:/pods/kube-system/kube-apiserver-ha-792400-m02,type:*core.Pod,resource:pods 6137ms (23:18:10.299)
	Trace[1266729173]:  ---"Txn call succeeded" 6123ms (23:18:16.423)]
	Trace[1266729173]: [6.15280543s] [6.15280543s] END
	I0307 23:18:16.454081       1 trace.go:236] Trace[1801165914]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:92490f20-a477-4c97-8953-183c86098a2e,client:172.20.50.199,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-792400-m02/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (07-Mar-2024 23:18:11.561) (total time: 4892ms):
	Trace[1801165914]: ["GuaranteedUpdate etcd3" audit-id:92490f20-a477-4c97-8953-183c86098a2e,key:/minions/ha-792400-m02,type:*core.Node,resource:nodes 4892ms (23:18:11.561)
	Trace[1801165914]:  ---"Txn call completed" 4856ms (23:18:16.422)]
	Trace[1801165914]: ---"About to apply patch" 4856ms (23:18:16.422)
	Trace[1801165914]: ---"Object stored in database" 29ms (23:18:16.453)
	Trace[1801165914]: [4.892948891s] [4.892948891s] END
	I0307 23:18:16.455339       1 trace.go:236] Trace[980215000]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e04729aa-135a-4e8a-abe7-733be0829485,client:172.20.50.199,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (07-Mar-2024 23:18:10.286) (total time: 6168ms):
	Trace[980215000]: ["Create etcd3" audit-id:e04729aa-135a-4e8a-abe7-733be0829485,key:/pods/kube-system/kube-scheduler-ha-792400-m02,type:*core.Pod,resource:pods 6157ms (23:18:10.298)
	Trace[980215000]:  ---"Txn call succeeded" 6129ms (23:18:16.427)]
	Trace[980215000]: ---"Write to database call failed" len:1220,err:pods "kube-scheduler-ha-792400-m02" already exists 27ms (23:18:16.455)
	Trace[980215000]: [6.168439747s] [6.168439747s] END
	I0307 23:21:57.838556       1 trace.go:236] Trace[906739485]: "Get" accept:application/json, */*,audit-id:729615eb-efb9-4aca-b017-2e0027e622d1,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (07-Mar-2024 23:21:57.316) (total time: 521ms):
	Trace[906739485]: ---"About to write a response" 521ms (23:21:57.838)
	Trace[906739485]: [521.884041ms] [521.884041ms] END
	I0307 23:21:57.841697       1 trace.go:236] Trace[1407565757]: "Update" accept:application/json, */*,audit-id:db8ad77c-6a67-47ed-8ffb-8531216ea29f,client:172.20.58.169,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (07-Mar-2024 23:21:57.236) (total time: 605ms):
	Trace[1407565757]: ["GuaranteedUpdate etcd3" audit-id:db8ad77c-6a67-47ed-8ffb-8531216ea29f,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 605ms (23:21:57.236)
	Trace[1407565757]:  ---"Txn call completed" 604ms (23:21:57.841)]
	Trace[1407565757]: [605.557454ms] [605.557454ms] END
	
	
	==> kube-controller-manager [45cfa4cc5c46] <==
	I0307 23:21:46.578094       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-7fkl9"
	I0307 23:21:47.674687       1 event.go:307] "Event occurred" object="ha-792400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-792400-m03 event: Registered Node ha-792400-m03 in Controller"
	I0307 23:21:47.695169       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-792400-m03"
	I0307 23:22:46.288136       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 3"
	I0307 23:22:46.344009       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-dswbq"
	I0307 23:22:46.406955       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-pzdrp"
	I0307 23:22:46.409005       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-wmtt9"
	I0307 23:22:46.470185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="183.245266ms"
	I0307 23:22:46.607737       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-pzdrp"
	I0307 23:22:46.675511       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-vswxr"
	I0307 23:22:46.675569       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-zgznw"
	I0307 23:22:46.829799       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="359.242888ms"
	I0307 23:22:46.874424       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="44.391994ms"
	I0307 23:22:46.874590       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="132.904µs"
	I0307 23:22:47.063775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="81.887555ms"
	I0307 23:22:47.063940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="68.103µs"
	I0307 23:22:47.908024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="98.603µs"
	I0307 23:22:47.934513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="49.502µs"
	I0307 23:22:47.963147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="75.102µs"
	I0307 23:22:49.416115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="77.482965ms"
	I0307 23:22:49.416715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="83.901µs"
	I0307 23:22:49.694074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="68.286621ms"
	I0307 23:22:49.694548       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="207.501µs"
	I0307 23:22:49.969957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="33.528857ms"
	I0307 23:22:49.970632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="34.1µs"
	
	
	==> kube-proxy [59baf1bee5fe] <==
	I0307 23:14:34.497587       1 server_others.go:69] "Using iptables proxy"
	I0307 23:14:34.511825       1 node.go:141] Successfully retrieved node IP: 172.20.58.169
	I0307 23:14:34.566839       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0307 23:14:34.566913       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0307 23:14:34.573041       1 server_others.go:152] "Using iptables Proxier"
	I0307 23:14:34.573175       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 23:14:34.574019       1 server.go:846] "Version info" version="v1.28.4"
	I0307 23:14:34.574148       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 23:14:34.575768       1 config.go:188] "Starting service config controller"
	I0307 23:14:34.575817       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0307 23:14:34.575847       1 config.go:97] "Starting endpoint slice config controller"
	I0307 23:14:34.575856       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0307 23:14:34.576563       1 config.go:315] "Starting node config controller"
	I0307 23:14:34.576600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0307 23:14:34.677153       1 shared_informer.go:318] Caches are synced for node config
	I0307 23:14:34.677555       1 shared_informer.go:318] Caches are synced for service config
	I0307 23:14:34.677669       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7f9766203c09] <==
	E0307 23:14:18.183206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0307 23:14:18.216088       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 23:14:18.217090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0307 23:14:18.304886       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0307 23:14:18.304987       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0307 23:14:20.657517       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0307 23:21:46.417967       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nwgxl\": pod kindnet-nwgxl is already assigned to node \"ha-792400-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-nwgxl" node="ha-792400-m03"
	E0307 23:21:46.418537       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 07d0d037-8522-4af4-9c41-d05bad3c2753(kube-system/kindnet-nwgxl) wasn't assumed so cannot be forgotten"
	E0307 23:21:46.420543       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nwgxl\": pod kindnet-nwgxl is already assigned to node \"ha-792400-m03\"" pod="kube-system/kindnet-nwgxl"
	I0307 23:21:46.421086       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-nwgxl" node="ha-792400-m03"
	E0307 23:21:46.421470       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2rxpp\": pod kube-proxy-2rxpp is already assigned to node \"ha-792400-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2rxpp" node="ha-792400-m03"
	E0307 23:21:46.421559       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod ea9a7d5a-b760-4056-ab38-cfa70276c427(kube-system/kube-proxy-2rxpp) wasn't assumed so cannot be forgotten"
	E0307 23:21:46.421803       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2rxpp\": pod kube-proxy-2rxpp is already assigned to node \"ha-792400-m03\"" pod="kube-system/kube-proxy-2rxpp"
	I0307 23:21:46.421906       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-2rxpp" node="ha-792400-m03"
	E0307 23:21:46.523949       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pmrsb\": pod kube-proxy-pmrsb is already assigned to node \"ha-792400-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pmrsb" node="ha-792400-m03"
	E0307 23:21:46.524782       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 389047d5-9f21-4181-982f-295e7d20e5cf(kube-system/kube-proxy-pmrsb) wasn't assumed so cannot be forgotten"
	E0307 23:21:46.525013       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pmrsb\": pod kube-proxy-pmrsb is already assigned to node \"ha-792400-m03\"" pod="kube-system/kube-proxy-pmrsb"
	I0307 23:21:46.525228       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-pmrsb" node="ha-792400-m03"
	I0307 23:22:46.378915       1 cache.go:518] "Pod was added to a different node than it was assumed" podKey="709c11ff-324c-401a-826a-318d1ca71260" pod="default/busybox-5b5d89c9d6-dswbq" assumedNode="ha-792400-m03" currentNode="ha-792400-m02"
	E0307 23:22:46.411059       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-dswbq\": pod busybox-5b5d89c9d6-dswbq is already assigned to node \"ha-792400-m03\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-dswbq" node="ha-792400-m02"
	E0307 23:22:46.417402       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 709c11ff-324c-401a-826a-318d1ca71260(default/busybox-5b5d89c9d6-dswbq) was assumed on ha-792400-m02 but assigned to ha-792400-m03"
	E0307 23:22:46.417855       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-dswbq\": pod busybox-5b5d89c9d6-dswbq is already assigned to node \"ha-792400-m03\"" pod="default/busybox-5b5d89c9d6-dswbq"
	I0307 23:22:46.418152       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-dswbq" node="ha-792400-m03"
	E0307 23:22:46.429715       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-pzdrp\": pod busybox-5b5d89c9d6-pzdrp is already assigned to node \"ha-792400\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-pzdrp" node="ha-792400"
	E0307 23:22:46.432130       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-pzdrp\": pod busybox-5b5d89c9d6-pzdrp is already assigned to node \"ha-792400\"" pod="default/busybox-5b5d89c9d6-pzdrp"
	
	
	==> kubelet <==
	Mar 07 23:20:20 ha-792400 kubelet[2501]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 07 23:21:20 ha-792400 kubelet[2501]: E0307 23:21:20.627619    2501 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 07 23:21:20 ha-792400 kubelet[2501]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 07 23:21:20 ha-792400 kubelet[2501]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 07 23:21:20 ha-792400 kubelet[2501]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 07 23:21:20 ha-792400 kubelet[2501]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 07 23:22:20 ha-792400 kubelet[2501]: E0307 23:22:20.637381    2501 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 07 23:22:20 ha-792400 kubelet[2501]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 07 23:22:20 ha-792400 kubelet[2501]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 07 23:22:20 ha-792400 kubelet[2501]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 07 23:22:20 ha-792400 kubelet[2501]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 07 23:22:46 ha-792400 kubelet[2501]: I0307 23:22:46.430361    2501 topology_manager.go:215] "Topology Admit Handler" podUID="79d5ed09-31ea-449f-987d-39727287c282" podNamespace="default" podName="busybox-5b5d89c9d6-pzdrp"
	Mar 07 23:22:46 ha-792400 kubelet[2501]: I0307 23:22:46.445683    2501 topology_manager.go:215] "Topology Admit Handler" podUID="228c2c21-2114-4a27-bf7d-55a00f08f8bd" podNamespace="default" podName="busybox-5b5d89c9d6-wmtt9"
	Mar 07 23:22:46 ha-792400 kubelet[2501]: I0307 23:22:46.591505    2501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgttn\" (UniqueName: \"kubernetes.io/projected/79d5ed09-31ea-449f-987d-39727287c282-kube-api-access-rgttn\") pod \"busybox-5b5d89c9d6-pzdrp\" (UID: \"79d5ed09-31ea-449f-987d-39727287c282\") " pod="default/busybox-5b5d89c9d6-pzdrp"
	Mar 07 23:22:46 ha-792400 kubelet[2501]: I0307 23:22:46.591586    2501 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpv8h\" (UniqueName: \"kubernetes.io/projected/228c2c21-2114-4a27-bf7d-55a00f08f8bd-kube-api-access-wpv8h\") pod \"busybox-5b5d89c9d6-wmtt9\" (UID: \"228c2c21-2114-4a27-bf7d-55a00f08f8bd\") " pod="default/busybox-5b5d89c9d6-wmtt9"
	Mar 07 23:22:46 ha-792400 kubelet[2501]: E0307 23:22:46.605165    2501 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-rgttn], unattached volumes=[], failed to process volumes=[]: context canceled" pod="default/busybox-5b5d89c9d6-pzdrp" podUID="79d5ed09-31ea-449f-987d-39727287c282"
	Mar 07 23:22:46 ha-792400 kubelet[2501]: I0307 23:22:46.994948    2501 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgttn\" (UniqueName: \"kubernetes.io/projected/79d5ed09-31ea-449f-987d-39727287c282-kube-api-access-rgttn\") pod \"79d5ed09-31ea-449f-987d-39727287c282\" (UID: \"79d5ed09-31ea-449f-987d-39727287c282\") "
	Mar 07 23:22:47 ha-792400 kubelet[2501]: I0307 23:22:47.001566    2501 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d5ed09-31ea-449f-987d-39727287c282-kube-api-access-rgttn" (OuterVolumeSpecName: "kube-api-access-rgttn") pod "79d5ed09-31ea-449f-987d-39727287c282" (UID: "79d5ed09-31ea-449f-987d-39727287c282"). InnerVolumeSpecName "kube-api-access-rgttn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 23:22:47 ha-792400 kubelet[2501]: I0307 23:22:47.096444    2501 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rgttn\" (UniqueName: \"kubernetes.io/projected/79d5ed09-31ea-449f-987d-39727287c282-kube-api-access-rgttn\") on node \"ha-792400\" DevicePath \"\""
	Mar 07 23:22:48 ha-792400 kubelet[2501]: I0307 23:22:48.577643    2501 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="79d5ed09-31ea-449f-987d-39727287c282" path="/var/lib/kubelet/pods/79d5ed09-31ea-449f-987d-39727287c282/volumes"
	Mar 07 23:23:20 ha-792400 kubelet[2501]: E0307 23:23:20.633139    2501 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 07 23:23:20 ha-792400 kubelet[2501]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 07 23:23:20 ha-792400 kubelet[2501]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 07 23:23:20 ha-792400 kubelet[2501]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 07 23:23:20 ha-792400 kubelet[2501]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 23:23:42.982632    6608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-792400 -n ha-792400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-792400 -n ha-792400: (11.650765s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-792400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/PingHostFromPods (66.62s)

                                                
                                    
x
+
TestMutliControlPlane/serial/RestartSecondaryNode (185.6s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 node start m02 -v=7 --alsologtostderr
E0307 23:39:58.870187    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
ha_test.go:420: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-792400 node start m02 -v=7 --alsologtostderr: exit status 1 (1m51.1611706s)

                                                
                                                
-- stdout --
	* Starting "ha-792400-m02" control-plane node in "ha-792400" cluster
	* Restarting existing hyperv VM for "ha-792400-m02" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 23:39:47.256006   10756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0307 23:39:47.339388   10756 out.go:291] Setting OutFile to fd 848 ...
	I0307 23:39:47.354831   10756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 23:39:47.354831   10756 out.go:304] Setting ErrFile to fd 704...
	I0307 23:39:47.354831   10756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 23:39:47.370036   10756 mustload.go:65] Loading cluster: ha-792400
	I0307 23:39:47.372140   10756 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:39:47.373038   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:39:49.316780   10756 main.go:141] libmachine: [stdout =====>] : Off
	
	I0307 23:39:49.316780   10756 main.go:141] libmachine: [stderr =====>] : 
	W0307 23:39:49.316780   10756 host.go:58] "ha-792400-m02" host status: Stopped
	I0307 23:39:49.320776   10756 out.go:177] * Starting "ha-792400-m02" control-plane node in "ha-792400" cluster
	I0307 23:39:49.323328   10756 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 23:39:49.323632   10756 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0307 23:39:49.323773   10756 cache.go:56] Caching tarball of preloaded images
	I0307 23:39:49.323800   10756 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 23:39:49.323800   10756 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 23:39:49.324392   10756 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:39:49.327058   10756 start.go:360] acquireMachinesLock for ha-792400-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 23:39:49.327123   10756 start.go:364] duration metric: took 64.9µs to acquireMachinesLock for "ha-792400-m02"
	I0307 23:39:49.327123   10756 start.go:96] Skipping create...Using existing machine configuration
	I0307 23:39:49.327123   10756 fix.go:54] fixHost starting: m02
	I0307 23:39:49.327839   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:39:51.299750   10756 main.go:141] libmachine: [stdout =====>] : Off
	
	I0307 23:39:51.301128   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:51.301128   10756 fix.go:112] recreateIfNeeded on ha-792400-m02: state=Stopped err=<nil>
	W0307 23:39:51.301128   10756 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 23:39:51.304452   10756 out.go:177] * Restarting existing hyperv VM for "ha-792400-m02" ...
	I0307 23:39:51.307030   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-792400-m02
	I0307 23:39:54.170904   10756 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:39:54.170979   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:54.170979   10756 main.go:141] libmachine: Waiting for host to start...
	I0307 23:39:54.171058   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:39:56.233048   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:39:56.233048   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:56.233048   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:39:58.587289   10756 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:39:58.587289   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:59.600996   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:40:01.666297   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:40:01.666372   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:01.666372   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:40:04.010919   10756 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:40:04.010919   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:05.013861   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:40:07.039515   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:40:07.039561   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:07.039710   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:40:09.350576   10756 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:40:09.350576   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:10.361247   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:40:12.366311   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:40:12.366311   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:12.377271   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:40:14.708727   10756 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:40:14.719581   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:15.728697   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:40:17.760255   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:40:17.760255   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:17.760584   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:40:20.060618   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:40:20.060618   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:20.073370   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:40:22.007700   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:40:22.007700   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:22.007700   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:40:24.310235   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:40:24.310235   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:24.321925   10756 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:40:24.324912   10756 machine.go:94] provisionDockerMachine start ...
	I0307 23:40:24.325041   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:40:26.266746   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:40:26.266746   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:26.266746   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:40:28.586750   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:40:28.586750   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:28.593885   10756 main.go:141] libmachine: Using SSH client type: native
	I0307 23:40:28.594278   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
	I0307 23:40:28.594278   10756 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 23:40:28.735479   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 23:40:28.735479   10756 buildroot.go:166] provisioning hostname "ha-792400-m02"
	I0307 23:40:28.735479   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:40:30.660928   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:40:30.660928   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:30.660928   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:40:32.978775   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:40:32.978775   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:32.984498   10756 main.go:141] libmachine: Using SSH client type: native
	I0307 23:40:32.984932   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
	I0307 23:40:32.985000   10756 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792400-m02 && echo "ha-792400-m02" | sudo tee /etc/hostname
	I0307 23:40:33.133205   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792400-m02
	
	I0307 23:40:33.133326   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:40:35.062216   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:40:35.062216   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:35.062216   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:40:37.369900   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:40:37.369900   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:37.375200   10756 main.go:141] libmachine: Using SSH client type: native
	I0307 23:40:37.376027   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
	I0307 23:40:37.376027   10756 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 23:40:37.520807   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 23:40:37.520807   10756 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0307 23:40:37.520807   10756 buildroot.go:174] setting up certificates
	I0307 23:40:37.520807   10756 provision.go:84] configureAuth start
	I0307 23:40:37.520807   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:40:39.461012   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:40:39.461012   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:39.461012   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:40:41.807821   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:40:41.807902   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:41.808023   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:40:43.803190   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:40:43.803190   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:43.803190   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:40:46.180069   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:40:46.180145   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:46.180145   10756 provision.go:143] copyHostCerts
	I0307 23:40:46.180304   10756 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0307 23:40:46.180304   10756 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0307 23:40:46.180304   10756 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0307 23:40:46.181168   10756 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0307 23:40:46.182089   10756 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0307 23:40:46.182730   10756 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0307 23:40:46.182799   10756 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0307 23:40:46.182799   10756 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0307 23:40:46.183975   10756 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0307 23:40:46.184712   10756 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0307 23:40:46.184712   10756 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0307 23:40:46.184922   10756 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0307 23:40:46.185630   10756 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-792400-m02 san=[127.0.0.1 172.20.49.16 ha-792400-m02 localhost minikube]
	I0307 23:40:46.245544   10756 provision.go:177] copyRemoteCerts
	I0307 23:40:46.256267   10756 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 23:40:46.256267   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:40:48.270259   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:40:48.270259   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:48.270259   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:40:50.673958   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:40:50.673958   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:50.674496   10756 sshutil.go:53] new ssh client: &{IP:172.20.49.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
	I0307 23:40:50.778823   10756 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5224466s)
	I0307 23:40:50.778939   10756 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0307 23:40:50.779410   10756 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 23:40:50.824446   10756 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0307 23:40:50.824913   10756 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0307 23:40:50.875450   10756 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0307 23:40:50.875876   10756 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 23:40:50.917490   10756 provision.go:87] duration metric: took 13.3964869s to configureAuth
	I0307 23:40:50.917536   10756 buildroot.go:189] setting minikube options for container-runtime
	I0307 23:40:50.917961   10756 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:40:50.917961   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:40:52.933790   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:40:52.933790   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:52.934277   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:40:55.328124   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:40:55.328124   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:55.333622   10756 main.go:141] libmachine: Using SSH client type: native
	I0307 23:40:55.333730   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
	I0307 23:40:55.333730   10756 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 23:40:55.468241   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 23:40:55.468334   10756 buildroot.go:70] root file system type: tmpfs
	I0307 23:40:55.468535   10756 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 23:40:55.468732   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:40:57.499572   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:40:57.500045   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:57.500172   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:40:59.910697   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:40:59.910697   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:40:59.917535   10756 main.go:141] libmachine: Using SSH client type: native
	I0307 23:40:59.917753   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
	I0307 23:40:59.917753   10756 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 23:41:00.086185   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 23:41:00.086306   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:41:02.098129   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:41:02.098529   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:02.098618   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:41:04.570126   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:41:04.570892   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:04.575863   10756 main.go:141] libmachine: Using SSH client type: native
	I0307 23:41:04.576393   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
	I0307 23:41:04.576393   10756 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 23:41:05.998582   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 23:41:05.998582   10756 machine.go:97] duration metric: took 41.6731481s to provisionDockerMachine
	I0307 23:41:05.998582   10756 start.go:293] postStartSetup for "ha-792400-m02" (driver="hyperv")
	I0307 23:41:05.998582   10756 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 23:41:06.010071   10756 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 23:41:06.010071   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:41:08.043015   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:41:08.043015   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:08.043079   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:41:10.504940   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:41:10.505230   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:10.505631   10756 sshutil.go:53] new ssh client: &{IP:172.20.49.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
	I0307 23:41:10.617613   10756 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6074986s)
	I0307 23:41:10.629086   10756 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 23:41:10.636005   10756 info.go:137] Remote host: Buildroot 2023.02.9
	I0307 23:41:10.636005   10756 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0307 23:41:10.636488   10756 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0307 23:41:10.637397   10756 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0307 23:41:10.637454   10756 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0307 23:41:10.648483   10756 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 23:41:10.664662   10756 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0307 23:41:10.708865   10756 start.go:296] duration metric: took 4.7102384s for postStartSetup
	I0307 23:41:10.719861   10756 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0307 23:41:10.719861   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:41:12.719223   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:41:12.719223   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:12.719845   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:41:15.159256   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:41:15.159256   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:15.160048   10756 sshutil.go:53] new ssh client: &{IP:172.20.49.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
	I0307 23:41:15.264978   10756 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.5450738s)
	I0307 23:41:15.265082   10756 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0307 23:41:15.277296   10756 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0307 23:41:15.346953   10756 fix.go:56] duration metric: took 1m26.0190201s for fixHost
	I0307 23:41:15.346953   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:41:17.349646   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:41:17.349646   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:17.349646   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:41:19.802200   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:41:19.802200   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:19.807827   10756 main.go:141] libmachine: Using SSH client type: native
	I0307 23:41:19.807970   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
	I0307 23:41:19.807970   10756 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 23:41:19.949509   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709854879.968169500
	
	I0307 23:41:19.949565   10756 fix.go:216] guest clock: 1709854879.968169500
	I0307 23:41:19.949565   10756 fix.go:229] Guest: 2024-03-07 23:41:19.9681695 +0000 UTC Remote: 2024-03-07 23:41:15.3469538 +0000 UTC m=+88.184703901 (delta=4.6212157s)
	I0307 23:41:19.949677   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:41:21.967095   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:41:21.967095   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:21.968122   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:41:24.375041   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:41:24.375328   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:24.380160   10756 main.go:141] libmachine: Using SSH client type: native
	I0307 23:41:24.380631   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
	I0307 23:41:24.380631   10756 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709854879
	I0307 23:41:24.528349   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar  7 23:41:19 UTC 2024
	
	I0307 23:41:24.528349   10756 fix.go:236] clock set: Thu Mar  7 23:41:19 UTC 2024
	 (err=<nil>)
	I0307 23:41:24.528349   10756 start.go:83] releasing machines lock for "ha-792400-m02", held for 1m35.2003283s
	I0307 23:41:24.528349   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:41:26.532528   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:41:26.532528   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:26.532528   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:41:28.998342   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:41:28.998342   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:29.002942   10756 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 23:41:29.003263   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:41:29.013990   10756 ssh_runner.go:195] Run: systemctl --version
	I0307 23:41:29.013990   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:41:31.089872   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:41:31.089948   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:31.090037   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:41:31.090037   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:41:31.090037   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:31.090037   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:41:33.616195   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:41:33.616195   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:33.617554   10756 sshutil.go:53] new ssh client: &{IP:172.20.49.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
	I0307 23:41:33.639076   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16
	
	I0307 23:41:33.639076   10756 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:41:33.639546   10756 sshutil.go:53] new ssh client: &{IP:172.20.49.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
	I0307 23:41:33.789080   10756 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7860929s)
	I0307 23:41:33.789080   10756 ssh_runner.go:235] Completed: systemctl --version: (4.7750448s)
	I0307 23:41:33.800656   10756 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 23:41:33.809345   10756 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 23:41:33.819577   10756 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 23:41:33.846131   10756 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 23:41:33.846131   10756 start.go:494] detecting cgroup driver to use...
	I0307 23:41:33.846131   10756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 23:41:33.890381   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 23:41:33.922136   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 23:41:33.943734   10756 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 23:41:33.955732   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 23:41:33.985777   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 23:41:34.017320   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 23:41:34.046638   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 23:41:34.075706   10756 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 23:41:34.106481   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 23:41:34.134096   10756 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 23:41:34.162773   10756 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 23:41:34.191141   10756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:41:34.369157   10756 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 23:41:34.401767   10756 start.go:494] detecting cgroup driver to use...
	I0307 23:41:34.413672   10756 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 23:41:34.447668   10756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 23:41:34.479333   10756 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 23:41:34.520456   10756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 23:41:34.552993   10756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 23:41:34.590689   10756 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 23:41:34.649071   10756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 23:41:34.671899   10756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 23:41:34.715220   10756 ssh_runner.go:195] Run: which cri-dockerd
	I0307 23:41:34.732660   10756 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 23:41:34.749900   10756 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 23:41:34.790381   10756 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 23:41:34.977430   10756 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 23:41:35.161528   10756 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 23:41:35.161528   10756 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 23:41:35.203231   10756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:41:35.387638   10756 ssh_runner.go:195] Run: sudo systemctl restart docker

                                                
                                                
** /stderr **
ha_test.go:422: W0307 23:39:47.256006   10756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0307 23:39:47.339388   10756 out.go:291] Setting OutFile to fd 848 ...
I0307 23:39:47.354831   10756 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 23:39:47.354831   10756 out.go:304] Setting ErrFile to fd 704...
I0307 23:39:47.354831   10756 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 23:39:47.370036   10756 mustload.go:65] Loading cluster: ha-792400
I0307 23:39:47.372140   10756 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 23:39:47.373038   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:39:49.316780   10756 main.go:141] libmachine: [stdout =====>] : Off

I0307 23:39:49.316780   10756 main.go:141] libmachine: [stderr =====>] : 
W0307 23:39:49.316780   10756 host.go:58] "ha-792400-m02" host status: Stopped
I0307 23:39:49.320776   10756 out.go:177] * Starting "ha-792400-m02" control-plane node in "ha-792400" cluster
I0307 23:39:49.323328   10756 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0307 23:39:49.323632   10756 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
I0307 23:39:49.323773   10756 cache.go:56] Caching tarball of preloaded images
I0307 23:39:49.323800   10756 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0307 23:39:49.323800   10756 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0307 23:39:49.324392   10756 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
I0307 23:39:49.327058   10756 start.go:360] acquireMachinesLock for ha-792400-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 23:39:49.327123   10756 start.go:364] duration metric: took 64.9µs to acquireMachinesLock for "ha-792400-m02"
I0307 23:39:49.327123   10756 start.go:96] Skipping create...Using existing machine configuration
I0307 23:39:49.327123   10756 fix.go:54] fixHost starting: m02
I0307 23:39:49.327839   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:39:51.299750   10756 main.go:141] libmachine: [stdout =====>] : Off

I0307 23:39:51.301128   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:39:51.301128   10756 fix.go:112] recreateIfNeeded on ha-792400-m02: state=Stopped err=<nil>
W0307 23:39:51.301128   10756 fix.go:138] unexpected machine state, will restart: <nil>
I0307 23:39:51.304452   10756 out.go:177] * Restarting existing hyperv VM for "ha-792400-m02" ...
I0307 23:39:51.307030   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-792400-m02
I0307 23:39:54.170904   10756 main.go:141] libmachine: [stdout =====>] : 
I0307 23:39:54.170979   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:39:54.170979   10756 main.go:141] libmachine: Waiting for host to start...
I0307 23:39:54.171058   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:39:56.233048   10756 main.go:141] libmachine: [stdout =====>] : Running

I0307 23:39:56.233048   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:39:56.233048   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:39:58.587289   10756 main.go:141] libmachine: [stdout =====>] : 
I0307 23:39:58.587289   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:39:59.600996   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:40:01.666297   10756 main.go:141] libmachine: [stdout =====>] : Running

I0307 23:40:01.666372   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:01.666372   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:40:04.010919   10756 main.go:141] libmachine: [stdout =====>] : 
I0307 23:40:04.010919   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:05.013861   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:40:07.039515   10756 main.go:141] libmachine: [stdout =====>] : Running

I0307 23:40:07.039561   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:07.039710   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:40:09.350576   10756 main.go:141] libmachine: [stdout =====>] : 
I0307 23:40:09.350576   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:10.361247   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:40:12.366311   10756 main.go:141] libmachine: [stdout =====>] : Running

I0307 23:40:12.366311   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:12.377271   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:40:14.708727   10756 main.go:141] libmachine: [stdout =====>] : 
I0307 23:40:14.719581   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:15.728697   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:40:17.760255   10756 main.go:141] libmachine: [stdout =====>] : Running

I0307 23:40:17.760255   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:17.760584   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:40:20.060618   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

I0307 23:40:20.060618   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:20.073370   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:40:22.007700   10756 main.go:141] libmachine: [stdout =====>] : Running

I0307 23:40:22.007700   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:22.007700   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:40:24.310235   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

I0307 23:40:24.310235   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:24.321925   10756 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
I0307 23:40:24.324912   10756 machine.go:94] provisionDockerMachine start ...
I0307 23:40:24.325041   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:40:26.266746   10756 main.go:141] libmachine: [stdout =====>] : Running

I0307 23:40:26.266746   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:26.266746   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:40:28.586750   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

I0307 23:40:28.586750   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:28.593885   10756 main.go:141] libmachine: Using SSH client type: native
I0307 23:40:28.594278   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
I0307 23:40:28.594278   10756 main.go:141] libmachine: About to run SSH command:
hostname
I0307 23:40:28.735479   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0307 23:40:28.735479   10756 buildroot.go:166] provisioning hostname "ha-792400-m02"
I0307 23:40:28.735479   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:40:30.660928   10756 main.go:141] libmachine: [stdout =====>] : Running

I0307 23:40:30.660928   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:30.660928   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:40:32.978775   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

I0307 23:40:32.978775   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:32.984498   10756 main.go:141] libmachine: Using SSH client type: native
I0307 23:40:32.984932   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
I0307 23:40:32.985000   10756 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-792400-m02 && echo "ha-792400-m02" | sudo tee /etc/hostname
I0307 23:40:33.133205   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792400-m02

I0307 23:40:33.133326   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:40:35.062216   10756 main.go:141] libmachine: [stdout =====>] : Running

I0307 23:40:35.062216   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:35.062216   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:40:37.369900   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

I0307 23:40:37.369900   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:37.375200   10756 main.go:141] libmachine: Using SSH client type: native
I0307 23:40:37.376027   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
I0307 23:40:37.376027   10756 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-792400-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792400-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-792400-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0307 23:40:37.520807   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0307 23:40:37.520807   10756 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
I0307 23:40:37.520807   10756 buildroot.go:174] setting up certificates
I0307 23:40:37.520807   10756 provision.go:84] configureAuth start
I0307 23:40:37.520807   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:40:39.461012   10756 main.go:141] libmachine: [stdout =====>] : Running

I0307 23:40:39.461012   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:39.461012   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:40:41.807821   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

I0307 23:40:41.807902   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:41.808023   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:40:43.803190   10756 main.go:141] libmachine: [stdout =====>] : Running

I0307 23:40:43.803190   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:43.803190   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:40:46.180069   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

I0307 23:40:46.180145   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:46.180145   10756 provision.go:143] copyHostCerts
I0307 23:40:46.180304   10756 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
I0307 23:40:46.180304   10756 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
I0307 23:40:46.180304   10756 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
I0307 23:40:46.181168   10756 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
I0307 23:40:46.182089   10756 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
I0307 23:40:46.182730   10756 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
I0307 23:40:46.182799   10756 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
I0307 23:40:46.182799   10756 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
I0307 23:40:46.183975   10756 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
I0307 23:40:46.184712   10756 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
I0307 23:40:46.184712   10756 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
I0307 23:40:46.184922   10756 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
I0307 23:40:46.185630   10756 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-792400-m02 san=[127.0.0.1 172.20.49.16 ha-792400-m02 localhost minikube]
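For reference, the server-cert step above is done in Go inside minikube's provisioner; the openssl commands below are only an illustrative, hypothetical equivalent of issuing a CA-signed server certificate with the same SAN list and file names shown in that log line:

    # hypothetical openssl equivalent (minikube does not shell out to openssl)
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.ha-792400-m02" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:172.20.49.16,DNS:ha-792400-m02,DNS:localhost,DNS:minikube") \
      -out server.pem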
I0307 23:40:46.245544   10756 provision.go:177] copyRemoteCerts
I0307 23:40:46.256267   10756 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0307 23:40:46.256267   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:40:48.270259   10756 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:40:48.270259   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:48.270259   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:40:50.673958   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

                                                
                                                
I0307 23:40:50.673958   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:50.674496   10756 sshutil.go:53] new ssh client: &{IP:172.20.49.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
I0307 23:40:50.778823   10756 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5224466s)
I0307 23:40:50.778939   10756 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0307 23:40:50.779410   10756 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0307 23:40:50.824446   10756 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0307 23:40:50.824913   10756 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
I0307 23:40:50.875450   10756 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0307 23:40:50.875876   10756 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0307 23:40:50.917490   10756 provision.go:87] duration metric: took 13.3964869s to configureAuth
I0307 23:40:50.917536   10756 buildroot.go:189] setting minikube options for container-runtime
I0307 23:40:50.917961   10756 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 23:40:50.917961   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:40:52.933790   10756 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:40:52.933790   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:52.934277   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:40:55.328124   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

                                                
                                                
I0307 23:40:55.328124   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:55.333622   10756 main.go:141] libmachine: Using SSH client type: native
I0307 23:40:55.333730   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
I0307 23:40:55.333730   10756 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0307 23:40:55.468241   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

                                                
                                                
I0307 23:40:55.468334   10756 buildroot.go:70] root file system type: tmpfs
I0307 23:40:55.468535   10756 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0307 23:40:55.468732   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:40:57.499572   10756 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:40:57.500045   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:57.500172   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:40:59.910697   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

                                                
                                                
I0307 23:40:59.910697   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:40:59.917535   10756 main.go:141] libmachine: Using SSH client type: native
I0307 23:40:59.917753   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
I0307 23:40:59.917753   10756 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

                                                
                                                
[Service]
Type=notify
Restart=on-failure

                                                
                                                

                                                
                                                

                                                
                                                
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

                                                
                                                
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

                                                
                                                
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

                                                
                                                
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

                                                
                                                
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

                                                
                                                
# kill only the docker process, not all processes in the cgroup
KillMode=process

                                                
                                                
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0307 23:41:00.086185   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

                                                
                                                
[Service]
Type=notify
Restart=on-failure

                                                
                                                

                                                
                                                

                                                
                                                
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

                                                
                                                
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

                                                
                                                
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

                                                
                                                
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

                                                
                                                
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

                                                
                                                
# kill only the docker process, not all processes in the cgroup
KillMode=process

                                                
                                                
[Install]
WantedBy=multi-user.target

                                                
                                                
I0307 23:41:00.086306   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:41:02.098129   10756 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:41:02.098529   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:02.098618   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:41:04.570126   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

                                                
                                                
I0307 23:41:04.570892   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:04.575863   10756 main.go:141] libmachine: Using SSH client type: native
I0307 23:41:04.576393   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
I0307 23:41:04.576393   10756 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0307 23:41:05.998582   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

                                                
                                                
I0307 23:41:05.998582   10756 machine.go:97] duration metric: took 41.6731481s to provisionDockerMachine
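The one-line SSH command above is the idempotent unit update: the freshly rendered docker.service.new is only swapped in (and docker reloaded/enabled/restarted) when it differs from what is already on the VM. Annotated recap of the same commands:

    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      # diff exits non-zero when the files differ or the old unit is missing
      # (here: "can't stat '/lib/systemd/system/docker.service'"), so the new unit is installed:
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload    # pick up the new unit file
      sudo systemctl -f enable docker    # creates the multi-user.target.wants symlink seen above
      sudo systemctl -f restart docker
    }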
I0307 23:41:05.998582   10756 start.go:293] postStartSetup for "ha-792400-m02" (driver="hyperv")
I0307 23:41:05.998582   10756 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0307 23:41:06.010071   10756 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0307 23:41:06.010071   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:41:08.043015   10756 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:41:08.043015   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:08.043079   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:41:10.504940   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

                                                
                                                
I0307 23:41:10.505230   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:10.505631   10756 sshutil.go:53] new ssh client: &{IP:172.20.49.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
I0307 23:41:10.617613   10756 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6074986s)
I0307 23:41:10.629086   10756 ssh_runner.go:195] Run: cat /etc/os-release
I0307 23:41:10.636005   10756 info.go:137] Remote host: Buildroot 2023.02.9
I0307 23:41:10.636005   10756 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
I0307 23:41:10.636488   10756 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
I0307 23:41:10.637397   10756 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
I0307 23:41:10.637454   10756 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
I0307 23:41:10.648483   10756 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0307 23:41:10.664662   10756 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
I0307 23:41:10.708865   10756 start.go:296] duration metric: took 4.7102384s for postStartSetup
I0307 23:41:10.719861   10756 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0307 23:41:10.719861   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:41:12.719223   10756 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:41:12.719223   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:12.719845   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:41:15.159256   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

                                                
                                                
I0307 23:41:15.159256   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:15.160048   10756 sshutil.go:53] new ssh client: &{IP:172.20.49.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
I0307 23:41:15.264978   10756 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.5450738s)
I0307 23:41:15.265082   10756 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0307 23:41:15.277296   10756 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
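The two commands above restore per-node state saved before the restart: whatever was backed up under /var/lib/minikube/backup (here just etc, per the "restoring vm config" line) is synced back onto the re-provisioned filesystem. Same commands, annotated:

    sudo ls --almost-all -1 /var/lib/minikube/backup               # lists what was backed up (here: etc)
    sudo rsync --archive --update /var/lib/minikube/backup/etc /   # --update skips files already newer on the VM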
I0307 23:41:15.346953   10756 fix.go:56] duration metric: took 1m26.0190201s for fixHost
I0307 23:41:15.346953   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:41:17.349646   10756 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:41:17.349646   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:17.349646   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:41:19.802200   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

                                                
                                                
I0307 23:41:19.802200   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:19.807827   10756 main.go:141] libmachine: Using SSH client type: native
I0307 23:41:19.807970   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
I0307 23:41:19.807970   10756 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0307 23:41:19.949509   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709854879.968169500

                                                
                                                
I0307 23:41:19.949565   10756 fix.go:216] guest clock: 1709854879.968169500
I0307 23:41:19.949565   10756 fix.go:229] Guest: 2024-03-07 23:41:19.9681695 +0000 UTC Remote: 2024-03-07 23:41:15.3469538 +0000 UTC m=+88.184703901 (delta=4.6212157s)
I0307 23:41:19.949677   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:41:21.967095   10756 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:41:21.967095   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:21.968122   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:41:24.375041   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

                                                
                                                
I0307 23:41:24.375328   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:24.380160   10756 main.go:141] libmachine: Using SSH client type: native
I0307 23:41:24.380631   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.49.16 22 <nil> <nil>}
I0307 23:41:24.380631   10756 main.go:141] libmachine: About to run SSH command:
sudo date -s @1709854879
I0307 23:41:24.528349   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar  7 23:41:19 UTC 2024

                                                
                                                
I0307 23:41:24.528349   10756 fix.go:236] clock set: Thu Mar  7 23:41:19 UTC 2024
(err=<nil>)
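The drift check above follows a simple shape: read the guest clock over SSH, compare it with the controller-side reference, and write a whole-second epoch back to the guest. Illustrative sketch only (the variable name is made up; the epoch and delta values are the ones from the log):

    guest=$(date +%s.%N)          # run on the VM over SSH -> 1709854879.968169500
    # fix.go compares that with the host-side reference above (delta=4.6212157s), then runs:
    sudo date -s @1709854879      # whole-second epoch written back to the guest clock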
I0307 23:41:24.528349   10756 start.go:83] releasing machines lock for "ha-792400-m02", held for 1m35.2003283s
I0307 23:41:24.528349   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:41:26.532528   10756 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:41:26.532528   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:26.532528   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:41:28.998342   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

                                                
                                                
I0307 23:41:28.998342   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:29.002942   10756 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0307 23:41:29.003263   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:41:29.013990   10756 ssh_runner.go:195] Run: systemctl --version
I0307 23:41:29.013990   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
I0307 23:41:31.089872   10756 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:41:31.089948   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:31.090037   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:41:31.090037   10756 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:41:31.090037   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:31.090037   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
I0307 23:41:33.616195   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

                                                
                                                
I0307 23:41:33.616195   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:33.617554   10756 sshutil.go:53] new ssh client: &{IP:172.20.49.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
I0307 23:41:33.639076   10756 main.go:141] libmachine: [stdout =====>] : 172.20.49.16

                                                
                                                
I0307 23:41:33.639076   10756 main.go:141] libmachine: [stderr =====>] : 
I0307 23:41:33.639546   10756 sshutil.go:53] new ssh client: &{IP:172.20.49.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
I0307 23:41:33.789080   10756 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7860929s)
I0307 23:41:33.789080   10756 ssh_runner.go:235] Completed: systemctl --version: (4.7750448s)
I0307 23:41:33.800656   10756 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0307 23:41:33.809345   10756 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0307 23:41:33.819577   10756 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0307 23:41:33.846131   10756 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
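The find invocation above (its %p printf directive is rendered as %!p(MISSING) by the Go logger) renames any pre-existing bridge/podman CNI configs so they are no longer picked up, which is what produces the "disabled [...]" line. The same expression, reformatted for readability with the %p directive assumed restored:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' \
      -exec sh -c 'sudo mv {} {}.mk_disabled' \;
    # here this matched /etc/cni/net.d/87-podman-bridge.conflist and renamed it to *.mk_disabled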
I0307 23:41:33.846131   10756 start.go:494] detecting cgroup driver to use...
I0307 23:41:33.846131   10756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 23:41:33.890381   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0307 23:41:33.922136   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0307 23:41:33.943734   10756 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0307 23:41:33.955732   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0307 23:41:33.985777   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 23:41:34.017320   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0307 23:41:34.046638   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 23:41:34.075706   10756 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0307 23:41:34.106481   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0307 23:41:34.134096   10756 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0307 23:41:34.162773   10756 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0307 23:41:34.191141   10756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 23:41:34.369157   10756 ssh_runner.go:195] Run: sudo systemctl restart containerd
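Taken together, the sed edits above point containerd at the pause:3.9 sandbox image, force the cgroupfs cgroup driver (SystemdCgroup = false), standardize on the runc.v2 shim, and reset the CNI conf_dir. The key expressions again, grouped with what each one changes:

    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml   # cgroupfs, not systemd
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml   # runc.v2 shim
    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml  # CNI conf dir
    sudo systemctl daemon-reload && sudo systemctl restart containerd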
I0307 23:41:34.401767   10756 start.go:494] detecting cgroup driver to use...
I0307 23:41:34.413672   10756 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0307 23:41:34.447668   10756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 23:41:34.479333   10756 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0307 23:41:34.520456   10756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0307 23:41:34.552993   10756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 23:41:34.590689   10756 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0307 23:41:34.649071   10756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 23:41:34.671899   10756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 23:41:34.715220   10756 ssh_runner.go:195] Run: which cri-dockerd
I0307 23:41:34.732660   10756 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0307 23:41:34.749900   10756 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0307 23:41:34.790381   10756 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0307 23:41:34.977430   10756 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0307 23:41:35.161528   10756 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0307 23:41:35.161528   10756 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0307 23:41:35.203231   10756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 23:41:35.387638   10756 ssh_runner.go:195] Run: sudo systemctl restart docker
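The docker.go:574 step above only reports the size of the daemon.json it writes (130 bytes), not its contents. A purely hypothetical sketch of what a cgroupfs daemon.json of that shape typically looks like, assumed and not taken from this log:

    # hypothetical content -- the actual 130-byte payload is not printed in the log
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker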
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-792400 node start m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr: context deadline exceeded (107.5µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr: context deadline exceeded (153.4µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr: context deadline exceeded (116.1µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr: context deadline exceeded (120.2µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr: context deadline exceeded (115.5µs)
ha_test.go:432: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-792400 -n ha-792400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-792400 -n ha-792400: (11.5616809s)
helpers_test.go:244: <<< TestMutliControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 logs -n 25: (8.2779868s)
helpers_test.go:252: TestMutliControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| ssh     | ha-792400 ssh -n                                                                                                         | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:34 UTC | 07 Mar 24 23:34 UTC |
	|         | ha-792400-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-792400 cp ha-792400-m03:/home/docker/cp-test.txt                                                                      | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:34 UTC | 07 Mar 24 23:34 UTC |
	|         | ha-792400:/home/docker/cp-test_ha-792400-m03_ha-792400.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-792400 ssh -n                                                                                                         | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:34 UTC | 07 Mar 24 23:34 UTC |
	|         | ha-792400-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-792400 ssh -n ha-792400 sudo cat                                                                                      | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:34 UTC | 07 Mar 24 23:34 UTC |
	|         | /home/docker/cp-test_ha-792400-m03_ha-792400.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-792400 cp ha-792400-m03:/home/docker/cp-test.txt                                                                      | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:34 UTC | 07 Mar 24 23:35 UTC |
	|         | ha-792400-m02:/home/docker/cp-test_ha-792400-m03_ha-792400-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-792400 ssh -n                                                                                                         | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:35 UTC | 07 Mar 24 23:35 UTC |
	|         | ha-792400-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-792400 ssh -n ha-792400-m02 sudo cat                                                                                  | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:35 UTC | 07 Mar 24 23:35 UTC |
	|         | /home/docker/cp-test_ha-792400-m03_ha-792400-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-792400 cp ha-792400-m03:/home/docker/cp-test.txt                                                                      | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:35 UTC | 07 Mar 24 23:35 UTC |
	|         | ha-792400-m04:/home/docker/cp-test_ha-792400-m03_ha-792400-m04.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-792400 ssh -n                                                                                                         | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:35 UTC | 07 Mar 24 23:35 UTC |
	|         | ha-792400-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-792400 ssh -n ha-792400-m04 sudo cat                                                                                  | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:35 UTC | 07 Mar 24 23:36 UTC |
	|         | /home/docker/cp-test_ha-792400-m03_ha-792400-m04.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-792400 cp testdata\cp-test.txt                                                                                        | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:36 UTC | 07 Mar 24 23:36 UTC |
	|         | ha-792400-m04:/home/docker/cp-test.txt                                                                                   |           |                   |         |                     |                     |
	| ssh     | ha-792400 ssh -n                                                                                                         | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:36 UTC | 07 Mar 24 23:36 UTC |
	|         | ha-792400-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-792400 cp ha-792400-m04:/home/docker/cp-test.txt                                                                      | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:36 UTC | 07 Mar 24 23:36 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile463807614\001\cp-test_ha-792400-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-792400 ssh -n                                                                                                         | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:36 UTC | 07 Mar 24 23:36 UTC |
	|         | ha-792400-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-792400 cp ha-792400-m04:/home/docker/cp-test.txt                                                                      | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:36 UTC | 07 Mar 24 23:36 UTC |
	|         | ha-792400:/home/docker/cp-test_ha-792400-m04_ha-792400.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-792400 ssh -n                                                                                                         | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:36 UTC | 07 Mar 24 23:37 UTC |
	|         | ha-792400-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-792400 ssh -n ha-792400 sudo cat                                                                                      | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:37 UTC | 07 Mar 24 23:37 UTC |
	|         | /home/docker/cp-test_ha-792400-m04_ha-792400.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-792400 cp ha-792400-m04:/home/docker/cp-test.txt                                                                      | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:37 UTC | 07 Mar 24 23:37 UTC |
	|         | ha-792400-m02:/home/docker/cp-test_ha-792400-m04_ha-792400-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-792400 ssh -n                                                                                                         | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:37 UTC | 07 Mar 24 23:37 UTC |
	|         | ha-792400-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-792400 ssh -n ha-792400-m02 sudo cat                                                                                  | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:37 UTC | 07 Mar 24 23:37 UTC |
	|         | /home/docker/cp-test_ha-792400-m04_ha-792400-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-792400 cp ha-792400-m04:/home/docker/cp-test.txt                                                                      | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:37 UTC | 07 Mar 24 23:38 UTC |
	|         | ha-792400-m03:/home/docker/cp-test_ha-792400-m04_ha-792400-m03.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-792400 ssh -n                                                                                                         | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:38 UTC | 07 Mar 24 23:38 UTC |
	|         | ha-792400-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-792400 ssh -n ha-792400-m03 sudo cat                                                                                  | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:38 UTC | 07 Mar 24 23:38 UTC |
	|         | /home/docker/cp-test_ha-792400-m04_ha-792400-m03.txt                                                                     |           |                   |         |                     |                     |
	| node    | ha-792400 node stop m02 -v=7                                                                                             | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:38 UTC | 07 Mar 24 23:38 UTC |
	|         | --alsologtostderr                                                                                                        |           |                   |         |                     |                     |
	| node    | ha-792400 node start m02 -v=7                                                                                            | ha-792400 | minikube7\jenkins | v1.32.0 | 07 Mar 24 23:39 UTC |                     |
	|         | --alsologtostderr                                                                                                        |           |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 23:11:38
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 23:11:38.444444    6816 out.go:291] Setting OutFile to fd 1008 ...
	I0307 23:11:38.444444    6816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 23:11:38.444444    6816 out.go:304] Setting ErrFile to fd 808...
	I0307 23:11:38.444444    6816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 23:11:38.468066    6816 out.go:298] Setting JSON to false
	I0307 23:11:38.469810    6816 start.go:129] hostinfo: {"hostname":"minikube7","uptime":12052,"bootTime":1709841045,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0307 23:11:38.469810    6816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 23:11:38.472877    6816 out.go:177] * [ha-792400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0307 23:11:38.479638    6816 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:11:38.478397    6816 notify.go:220] Checking for updates...
	I0307 23:11:38.482239    6816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 23:11:38.484603    6816 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0307 23:11:38.487541    6816 out.go:177]   - MINIKUBE_LOCATION=16214
	I0307 23:11:38.489679    6816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 23:11:38.493211    6816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 23:11:43.038717    6816 out.go:177] * Using the hyperv driver based on user configuration
	I0307 23:11:43.045309    6816 start.go:297] selected driver: hyperv
	I0307 23:11:43.045309    6816 start.go:901] validating driver "hyperv" against <nil>
	I0307 23:11:43.045309    6816 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 23:11:43.091556    6816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 23:11:43.092441    6816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 23:11:43.092441    6816 cni.go:84] Creating CNI manager for ""
	I0307 23:11:43.092441    6816 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0307 23:11:43.092441    6816 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 23:11:43.092441    6816 start.go:340] cluster config:
	{Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 23:11:43.093023    6816 iso.go:125] acquiring lock: {Name:mk41e0d38e058de906ab8df117c3158b3dc0e5b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 23:11:43.098711    6816 out.go:177] * Starting "ha-792400" primary control-plane node in "ha-792400" cluster
	I0307 23:11:43.099873    6816 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 23:11:43.099873    6816 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0307 23:11:43.099873    6816 cache.go:56] Caching tarball of preloaded images
	I0307 23:11:43.102273    6816 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 23:11:43.102483    6816 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 23:11:43.102664    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:11:43.103166    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json: {Name:mkf5192d5b57415acf5d5449be46341d91e1b9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:11:43.103954    6816 start.go:360] acquireMachinesLock for ha-792400: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 23:11:43.104387    6816 start.go:364] duration metric: took 388.2µs to acquireMachinesLock for "ha-792400"
	I0307 23:11:43.104505    6816 start.go:93] Provisioning new machine with config: &{Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:11:43.104505    6816 start.go:125] createHost starting for "" (driver="hyperv")
	I0307 23:11:43.105687    6816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 23:11:43.107584    6816 start.go:159] libmachine.API.Create for "ha-792400" (driver="hyperv")
	I0307 23:11:43.107584    6816 client.go:168] LocalClient.Create starting
	I0307 23:11:43.109801    6816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0307 23:11:43.110323    6816 main.go:141] libmachine: Decoding PEM data...
	I0307 23:11:43.110323    6816 main.go:141] libmachine: Parsing certificate...
	I0307 23:11:43.110548    6816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0307 23:11:43.110672    6816 main.go:141] libmachine: Decoding PEM data...
	I0307 23:11:43.110672    6816 main.go:141] libmachine: Parsing certificate...
	I0307 23:11:43.110672    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0307 23:11:44.801566    6816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0307 23:11:44.801566    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:11:44.810549    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0307 23:11:46.263991    6816 main.go:141] libmachine: [stdout =====>] : False
	
	I0307 23:11:46.263991    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:11:46.264332    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0307 23:11:47.494057    6816 main.go:141] libmachine: [stdout =====>] : True
	
	I0307 23:11:47.494057    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:11:47.494057    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0307 23:11:50.483353    6816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0307 23:11:50.483541    6816 main.go:141] libmachine: [stderr =====>] : 
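[editor's note] The two PowerShell invocations above are how the hyperv driver discovers a usable virtual switch: it asks Hyper-V for every switch that is either External or the well-known "Default Switch" GUID and decodes the JSON. A minimal, self-contained Go sketch of that pattern (simplified, without the Where-Object filter; the struct and helper names are illustrative, not minikube's actual code):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// vmSwitch mirrors the fields selected by the PowerShell query in the log.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

// listSwitches runs Get-VMSwitch non-interactively and decodes its JSON output.
func listSwitches() ([]vmSwitch, error) {
	script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
	out, err := exec.Command("powershell", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		return nil, err
	}
	if strings.TrimSpace(string(out)) == "" {
		return nil, nil // no switches defined
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		return nil, err
	}
	return switches, nil
}

func main() {
	switches, err := listSwitches()
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range switches {
		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}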
	I0307 23:11:50.485413    6816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0307 23:11:50.977315    6816 main.go:141] libmachine: Creating SSH key...
	I0307 23:11:51.085998    6816 main.go:141] libmachine: Creating VM...
	I0307 23:11:51.085998    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0307 23:11:53.479107    6816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0307 23:11:53.583884    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:11:53.598492    6816 main.go:141] libmachine: Using switch "Default Switch"
	I0307 23:11:53.598799    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0307 23:11:55.078860    6816 main.go:141] libmachine: [stdout =====>] : True
	
	I0307 23:11:55.090174    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:11:55.090266    6816 main.go:141] libmachine: Creating VHD
	I0307 23:11:55.090362    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0307 23:11:58.356006    6816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 862C84AF-F98E-4909-8B61-C2162CA03912
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0307 23:11:58.356006    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:11:58.356006    6816 main.go:141] libmachine: Writing magic tar header
	I0307 23:11:58.356006    6816 main.go:141] libmachine: Writing SSH key tar header
	I0307 23:11:58.362814    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0307 23:12:01.153134    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:01.162819    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:01.162819    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\disk.vhd' -SizeBytes 20000MB
	I0307 23:12:03.418843    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:03.428215    6816 main.go:141] libmachine: [stderr =====>] : 
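[editor's note] The VHD sequence between 23:11:55 and 23:12:03 is worth calling out: the driver first creates a tiny 10MB fixed (raw) VHD, writes a tar stream containing the freshly generated SSH key into it ("Writing magic tar header" / "Writing SSH key tar header"), and only then converts it to a dynamic VHD and resizes it to the requested 20000MB. On first boot the guest presumably scans the disk, finds that tar stream, and installs the key. A rough Go sketch of the key-injection step, assuming the docker-machine/boot2docker "please format-me" convention (the magic marker, file names, and paths here are assumptions, not taken from minikube's source):

package main

import (
	"archive/tar"
	"bytes"
	"log"
	"os"
)

// injectSSHKey writes a small tar stream to the start of a raw disk image so
// the guest can find and install the authorized key at first boot.
func injectSSHKey(diskPath string, pubKey []byte) error {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)

	magic := "boot2docker, please format-me" // assumed magic marker the guest looks for
	entries := []struct {
		hdr  *tar.Header
		body []byte
	}{
		{&tar.Header{Name: magic, Typeflag: tar.TypeReg, Size: int64(len(magic)), Mode: 0644}, []byte(magic)},
		{&tar.Header{Name: ".ssh", Typeflag: tar.TypeDir, Mode: 0700}, nil},
		{&tar.Header{Name: ".ssh/authorized_keys", Typeflag: tar.TypeReg, Size: int64(len(pubKey)), Mode: 0644}, pubKey},
	}
	for _, e := range entries {
		if err := tw.WriteHeader(e.hdr); err != nil {
			return err
		}
		if len(e.body) > 0 {
			if _, err := tw.Write(e.body); err != nil {
				return err
			}
		}
	}
	if err := tw.Close(); err != nil {
		return err
	}

	// Overwrite the beginning of the already-created fixed VHD with the tar stream.
	f, err := os.OpenFile(diskPath, os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.WriteAt(buf.Bytes(), 0)
	return err
}

func main() {
	pub, err := os.ReadFile("id_rsa.pub") // hypothetical public key path
	if err != nil {
		log.Fatal(err)
	}
	if err := injectSSHKey("fixed.vhd", pub); err != nil {
		log.Fatal(err)
	}
}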
	I0307 23:12:03.428286    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-792400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0307 23:12:06.566864    6816 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-792400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0307 23:12:06.566864    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:06.566955    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-792400 -DynamicMemoryEnabled $false
	I0307 23:12:08.431004    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:08.431004    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:08.431266    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-792400 -Count 2
	I0307 23:12:10.270550    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:10.270550    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:10.280036    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-792400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\boot2docker.iso'
	I0307 23:12:12.418383    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:12.418383    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:12.428259    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-792400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\disk.vhd'
	I0307 23:12:14.669648    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:14.669648    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:14.674712    6816 main.go:141] libmachine: Starting VM...
	I0307 23:12:14.674712    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-792400
	I0307 23:12:17.329220    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:17.329220    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:17.329220    6816 main.go:141] libmachine: Waiting for host to start...
	I0307 23:12:17.329220    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:19.249082    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:19.249082    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:19.256896    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:21.396309    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:21.396309    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:22.409940    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:24.349508    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:24.353234    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:24.353315    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:26.605658    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:26.605658    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:27.609373    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:29.477167    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:29.487971    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:29.487971    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:31.709696    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:31.709696    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:32.716688    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:34.608950    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:34.609709    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:34.609709    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:36.872811    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:12:36.872811    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:37.885900    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:39.837462    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:39.837462    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:39.848153    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:41.995938    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:12:41.995938    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:42.006202    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:43.776393    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:43.787163    6816 main.go:141] libmachine: [stderr =====>] : 
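[editor's note] The stretch from 23:12:17 to 23:12:43 is the host-start wait loop: the driver repeatedly asks Hyper-V for the VM state and for the first IP address on the first network adapter, pausing about a second between attempts, until DHCP has handed out 172.20.58.169. A condensed Go sketch of that retry pattern (the helper names, poll interval, and timeout are illustrative):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// ps runs a single PowerShell expression and returns its trimmed stdout.
func ps(expr string) (string, error) {
	out, err := exec.Command("powershell", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP polls VM state and the first adapter's first IP until one appears.
func waitForIP(vmName string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vmName))
		if err != nil {
			return "", err
		}
		if state != "Running" {
			return "", fmt.Errorf("vm %s is %s, not Running", vmName, state)
		}
		ip, err := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName))
		if err != nil {
			return "", err
		}
		if ip != "" {
			return ip, nil
		}
		time.Sleep(time.Second) // adapter has no address yet; try again
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	ip, err := waitForIP("ha-792400", 3*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("VM reachable at", ip)
}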
	I0307 23:12:43.787256    6816 machine.go:94] provisionDockerMachine start ...
	I0307 23:12:43.787410    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:45.572230    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:45.572230    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:45.582468    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:47.734483    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:12:47.745011    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:47.750118    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:12:47.757774    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:12:47.757774    6816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 23:12:47.875084    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 23:12:47.875084    6816 buildroot.go:166] provisioning hostname "ha-792400"
	I0307 23:12:47.875155    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:49.660460    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:49.660556    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:49.660556    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:51.799596    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:12:51.799596    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:51.804596    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:12:51.805295    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:12:51.805295    6816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792400 && echo "ha-792400" | sudo tee /etc/hostname
	I0307 23:12:51.941788    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792400
	
	I0307 23:12:51.941788    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:53.726495    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:53.732406    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:53.732461    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:55.876389    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:12:55.886880    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:55.892693    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:12:55.892850    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:12:55.892850    6816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792400/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 23:12:56.022802    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 23:12:56.022872    6816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0307 23:12:56.022953    6816 buildroot.go:174] setting up certificates
	I0307 23:12:56.022997    6816 provision.go:84] configureAuth start
	I0307 23:12:56.023069    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:12:57.817086    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:12:57.817086    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:57.817200    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:12:59.919167    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:12:59.929688    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:12:59.929688    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:01.729474    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:01.733698    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:01.733698    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:03.866168    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:03.866168    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:03.866168    6816 provision.go:143] copyHostCerts
	I0307 23:13:03.876430    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0307 23:13:03.876607    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0307 23:13:03.876607    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0307 23:13:03.877172    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0307 23:13:03.878554    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0307 23:13:03.878768    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0307 23:13:03.878835    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0307 23:13:03.878967    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0307 23:13:03.879760    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0307 23:13:03.880285    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0307 23:13:03.880285    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0307 23:13:03.880413    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0307 23:13:03.881793    6816 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-792400 san=[127.0.0.1 172.20.58.169 ha-792400 localhost minikube]
	I0307 23:13:04.084089    6816 provision.go:177] copyRemoteCerts
	I0307 23:13:04.107922    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 23:13:04.107922    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:05.913692    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:05.923745    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:05.923745    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:08.096603    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:08.096603    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:08.107950    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:13:08.208411    6816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1004505s)
	I0307 23:13:08.208411    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0307 23:13:08.209363    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0307 23:13:08.248004    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0307 23:13:08.248004    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 23:13:08.288127    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0307 23:13:08.288127    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 23:13:08.326817    6816 provision.go:87] duration metric: took 12.3036685s to configureAuth
	I0307 23:13:08.326919    6816 buildroot.go:189] setting minikube options for container-runtime
	I0307 23:13:08.327541    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:13:08.327646    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:10.078604    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:10.088591    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:10.088591    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:12.193804    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:12.193804    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:12.208519    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:13:12.209265    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:13:12.209265    6816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 23:13:12.327525    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 23:13:12.327525    6816 buildroot.go:70] root file system type: tmpfs
	I0307 23:13:12.327843    6816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 23:13:12.327933    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:14.078367    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:14.078367    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:14.088535    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:16.199916    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:16.199916    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:16.204529    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:13:16.205229    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:13:16.205229    6816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 23:13:16.346259    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 23:13:16.346399    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:18.101170    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:18.101170    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:18.101663    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:20.202574    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:20.212237    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:20.217283    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:13:20.217283    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:13:20.217283    6816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 23:13:21.247313    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 23:13:21.247313    6816 machine.go:97] duration metric: took 37.4597052s to provisionDockerMachine
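[editor's note] The docker.service install just above uses a "diff or replace and restart" idiom: the rendered unit is streamed to docker.service.new with sudo tee, and only when diff reports a difference (or, as here, the old unit does not exist at all, hence the "can't stat" message) is the new file moved into place and the daemon reloaded, enabled, and restarted. A minimal Go sketch that composes that remote command (in minikube the string is executed over SSH; here it is only printed):

package main

import "fmt"

// installUnitCmd reproduces the diff-or-replace idiom from the log: the service
// is only moved into place and restarted when the new unit actually differs.
func installUnitCmd(unitPath string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
		unitPath)
}

func main() {
	fmt.Println(installUnitCmd("/lib/systemd/system/docker.service"))
}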
	I0307 23:13:21.247313    6816 client.go:171] duration metric: took 1m38.1388065s to LocalClient.Create
	I0307 23:13:21.247313    6816 start.go:167] duration metric: took 1m38.1388065s to libmachine.API.Create "ha-792400"
	I0307 23:13:21.247313    6816 start.go:293] postStartSetup for "ha-792400" (driver="hyperv")
	I0307 23:13:21.247313    6816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 23:13:21.258925    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 23:13:21.258925    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:23.012360    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:23.022906    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:23.023018    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:25.142571    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:25.142571    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:25.153620    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:13:25.249260    6816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (3.9902976s)
	I0307 23:13:25.261757    6816 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 23:13:25.267450    6816 info.go:137] Remote host: Buildroot 2023.02.9
	I0307 23:13:25.267538    6816 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0307 23:13:25.267538    6816 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0307 23:13:25.268276    6816 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0307 23:13:25.268276    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0307 23:13:25.278228    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 23:13:25.296329    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0307 23:13:25.333880    6816 start.go:296] duration metric: took 4.0865291s for postStartSetup
	I0307 23:13:25.336578    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:27.092752    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:27.092752    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:27.102159    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:29.277102    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:29.277102    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:29.277348    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:13:29.279822    6816 start.go:128] duration metric: took 1m46.1743186s to createHost
	I0307 23:13:29.279945    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:31.033942    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:31.044144    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:31.044263    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:33.192048    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:33.202085    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:33.206574    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:13:33.207195    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:13:33.207195    6816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0307 23:13:33.325040    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709853213.334679515
	
	I0307 23:13:33.325040    6816 fix.go:216] guest clock: 1709853213.334679515
	I0307 23:13:33.325040    6816 fix.go:229] Guest: 2024-03-07 23:13:33.334679515 +0000 UTC Remote: 2024-03-07 23:13:29.279945 +0000 UTC m=+110.991515101 (delta=4.054734515s)
	I0307 23:13:33.325040    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:35.062598    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:35.062598    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:35.072444    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:37.201609    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:37.211461    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:37.216395    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:13:37.217074    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.58.169 22 <nil> <nil>}
	I0307 23:13:37.217074    6816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709853213
	I0307 23:13:37.346236    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar  7 23:13:33 UTC 2024
	
	I0307 23:13:37.346292    6816 fix.go:236] clock set: Thu Mar  7 23:13:33 UTC 2024
	 (err=<nil>)
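[editor's note] The fix.go lines above compare the guest clock against the host ("delta=4.054734515s") and, because the drift exceeds minikube's tolerance, reset the guest clock with sudo date -s @<epoch seconds>. A small Go sketch of that kind of check (the drift threshold and direction of correction are assumed illustrative values, not minikube's exact logic):

package main

import (
	"fmt"
	"time"
)

// clockFixCommand returns the remote command needed to correct guest drift,
// or "" when the clocks are close enough to leave alone.
func clockFixCommand(guest, host time.Time, maxDrift time.Duration) string {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= maxDrift {
		return ""
	}
	// date -s @<seconds> sets the guest clock to the chosen epoch time.
	return fmt.Sprintf("sudo date -s @%d", host.Unix())
}

func main() {
	host := time.Now()
	guest := host.Add(4 * time.Second) // drift similar to the log's delta
	if cmd := clockFixCommand(guest, host, 2*time.Second); cmd != "" {
		fmt.Println("would run over SSH:", cmd)
	}
}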
	I0307 23:13:37.346292    6816 start.go:83] releasing machines lock for "ha-792400", held for 1m54.2408316s
	I0307 23:13:37.346423    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:39.148042    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:39.148042    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:39.148042    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:41.286304    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:41.286377    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:41.290287    6816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 23:13:41.290361    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:41.301792    6816 ssh_runner.go:195] Run: cat /version.json
	I0307 23:13:41.301792    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:13:43.275293    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:43.275293    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:43.275293    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:43.276014    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:13:43.276014    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:43.276251    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:13:45.586568    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:45.596181    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:45.596546    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:13:45.614837    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:13:45.617399    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:13:45.617480    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:13:45.756146    6816 ssh_runner.go:235] Completed: cat /version.json: (4.4543118s)
	I0307 23:13:45.756146    6816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.4657425s)
	I0307 23:13:45.768143    6816 ssh_runner.go:195] Run: systemctl --version
	I0307 23:13:45.786269    6816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0307 23:13:45.794327    6816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 23:13:45.803784    6816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 23:13:45.827163    6816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 23:13:45.827163    6816 start.go:494] detecting cgroup driver to use...
	I0307 23:13:45.827472    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 23:13:45.864234    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 23:13:45.890427    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 23:13:45.907042    6816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 23:13:45.917584    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 23:13:45.943439    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 23:13:45.970338    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 23:13:45.999221    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 23:13:46.025444    6816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 23:13:46.053524    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 23:13:46.081965    6816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 23:13:46.107589    6816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 23:13:46.134300    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:13:46.290599    6816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 23:13:46.317756    6816 start.go:494] detecting cgroup driver to use...
	I0307 23:13:46.327291    6816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 23:13:46.359169    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 23:13:46.390019    6816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 23:13:46.419504    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 23:13:46.450319    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 23:13:46.479165    6816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 23:13:46.533953    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 23:13:46.552603    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 23:13:46.591865    6816 ssh_runner.go:195] Run: which cri-dockerd
	I0307 23:13:46.607007    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 23:13:46.622602    6816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 23:13:46.656531    6816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 23:13:46.832794    6816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 23:13:46.970730    6816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 23:13:46.971054    6816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 23:13:47.007830    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:13:47.183904    6816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 23:13:48.682406    6816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4984878s)
	I0307 23:13:48.692307    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 23:13:48.725390    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 23:13:48.756027    6816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 23:13:48.926521    6816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 23:13:49.096696    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:13:49.261907    6816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 23:13:49.297574    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 23:13:49.327032    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:13:49.492147    6816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 23:13:49.578133    6816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 23:13:49.591295    6816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 23:13:49.599325    6816 start.go:562] Will wait 60s for crictl version
	I0307 23:13:49.609641    6816 ssh_runner.go:195] Run: which crictl
	I0307 23:13:49.624537    6816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 23:13:49.685430    6816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
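[editor's note] After restarting cri-docker, the tool waits (up to 60s each, per the two "Will wait 60s" lines above) for /var/run/cri-dockerd.sock to exist and for crictl version to answer. A generic Go sketch of that bounded wait for a filesystem path (the poll interval is an assumption):

package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForPath polls for a filesystem path (here, a CRI socket) until it exists
// or the deadline passes, mirroring the "Will wait 60s for socket path" step.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second, time.Second); err != nil {
		log.Fatal(err)
	}
	fmt.Println("CRI socket is present")
}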
	I0307 23:13:49.693286    6816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 23:13:49.734500    6816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 23:13:49.764422    6816 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0307 23:13:49.764422    6816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0307 23:13:49.768694    6816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0307 23:13:49.768694    6816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0307 23:13:49.768694    6816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0307 23:13:49.768694    6816 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0307 23:13:49.771439    6816 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0307 23:13:49.771439    6816 ip.go:210] interface addr: 172.20.48.1/20
	I0307 23:13:49.777645    6816 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0307 23:13:49.785152    6816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 23:13:49.812482    6816 kubeadm.go:877] updating cluster {Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 23:13:49.812482    6816 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 23:13:49.821469    6816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 23:13:49.842626    6816 docker.go:685] Got preloaded images: 
	I0307 23:13:49.842626    6816 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0307 23:13:49.853493    6816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 23:13:49.884585    6816 ssh_runner.go:195] Run: which lz4
	I0307 23:13:49.890145    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0307 23:13:49.899438    6816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0307 23:13:49.905880    6816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0307 23:13:49.906006    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0307 23:13:52.205762    6816 docker.go:649] duration metric: took 2.315099s to copy over tarball
	I0307 23:13:52.215654    6816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0307 23:14:02.550024    6816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.3342156s)
	I0307 23:14:02.550078    6816 ssh_runner.go:146] rm: /preloaded.tar.lz4
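The preload step above copies an lz4-compressed image tarball into the guest and unpacks it with tar -I lz4 under /var. A minimal Go sketch of that unpack step, assuming the third-party github.com/pierrec/lz4/v4 package and a hypothetical local preloaded.tar.lz4 path (it only lists entries rather than writing files):

    package main

    import (
        "archive/tar"
        "fmt"
        "io"
        "log"
        "os"
        "path/filepath"

        "github.com/pierrec/lz4/v4"
    )

    func main() {
        // Open the lz4-compressed tarball (hypothetical local path).
        f, err := os.Open("preloaded.tar.lz4")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Wrap the file in an lz4 decompressor, then read it as a tar stream.
        tr := tar.NewReader(lz4.NewReader(f))
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                break
            }
            if err != nil {
                log.Fatal(err)
            }
            // Only list entries here; a real unpack would write files under a target dir.
            fmt.Println(filepath.Clean(hdr.Name), hdr.Size)
            _, _ = io.Copy(io.Discard, tr) // drain the entry body
        }
    }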
	I0307 23:14:02.611691    6816 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0307 23:14:02.628443    6816 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0307 23:14:02.665335    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:14:02.824847    6816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 23:14:05.048317    6816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.2233337s)
	I0307 23:14:05.056611    6816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0307 23:14:05.084587    6816 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0307 23:14:05.084587    6816 cache_images.go:84] Images are preloaded, skipping loading
	I0307 23:14:05.084587    6816 kubeadm.go:928] updating node { 172.20.58.169 8443 v1.28.4 docker true true} ...
	I0307 23:14:05.084587    6816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.58.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 23:14:05.095197    6816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0307 23:14:05.127094    6816 cni.go:84] Creating CNI manager for ""
	I0307 23:14:05.127148    6816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0307 23:14:05.127232    6816 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 23:14:05.127322    6816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.58.169 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-792400 NodeName:ha-792400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.58.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.58.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 23:14:05.127541    6816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.58.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-792400"
	  kubeletExtraArgs:
	    node-ip: 172.20.58.169
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.58.169"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
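The kubeadm config printed above is rendered from the kubeadm options struct logged at kubeadm.go:181. A rough sketch of that templating idea using only Go's text/template, with made-up struct and field names and just the InitConfiguration fragment, could look like this:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // Hypothetical subset of the values seen in the log above.
    type initCfg struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        CRISocket        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("init").Parse(tmpl))
        cfg := initCfg{
            AdvertiseAddress: "172.20.58.169",
            BindPort:         8443,
            NodeName:         "ha-792400",
            CRISocket:        "unix:///var/run/cri-dockerd.sock",
        }
        if err := t.Execute(os.Stdout, cfg); err != nil {
            log.Fatal(err)
        }
    }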
	
	I0307 23:14:05.127541    6816 kube-vip.go:101] generating kube-vip config ...
	I0307 23:14:05.127541    6816 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.63.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
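The kube-vip static pod above is what keeps the HA virtual IP 172.20.63.254 answering on port 8443 for this cluster. A tiny reachability probe in Go, assuming it runs from a host that can route to the VIP and skipping certificate verification only because the API server uses the cluster's own CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "net"
        "time"
    )

    func main() {
        dialer := &net.Dialer{Timeout: 5 * time.Second}
        conn, err := tls.DialWithDialer(dialer, "tcp", "172.20.63.254:8443",
            &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("VIP not reachable: %v", err)
        }
        defer conn.Close()
        fmt.Println("VIP answered, peer cert CN:",
            conn.ConnectionState().PeerCertificates[0].Subject.CommonName)
    }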
	I0307 23:14:05.138269    6816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 23:14:05.152709    6816 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 23:14:05.163146    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0307 23:14:05.176907    6816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0307 23:14:05.201950    6816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 23:14:05.234890    6816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0307 23:14:05.261326    6816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1262 bytes)
	I0307 23:14:05.296249    6816 ssh_runner.go:195] Run: grep 172.20.63.254	control-plane.minikube.internal$ /etc/hosts
	I0307 23:14:05.299531    6816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
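The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the HA VIP: drop any previous mapping, append the new one. The same idempotent pattern sketched in Go, writing to a scratch hosts.new file instead of /etc/hosts so it can be run without privileges:

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const entry = "172.20.63.254\t" + host

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // Drop any previous mapping for the control-plane name.
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        // A real run would then copy this file over /etc/hosts with sudo, as the log does.
        if err := os.WriteFile("hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }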
	I0307 23:14:05.328459    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:14:05.480656    6816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 23:14:05.503011    6816 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400 for IP: 172.20.58.169
	I0307 23:14:05.503011    6816 certs.go:194] generating shared ca certs ...
	I0307 23:14:05.503112    6816 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:05.503303    6816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0307 23:14:05.504213    6816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0307 23:14:05.504554    6816 certs.go:256] generating profile certs ...
	I0307 23:14:05.505468    6816 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.key
	I0307 23:14:05.505727    6816 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.crt with IP's: []
	I0307 23:14:05.765614    6816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.crt ...
	I0307 23:14:05.765614    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.crt: {Name:mk2eea3648a63e5ca5595a6e8e677d21f3c19bda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:05.772246    6816 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.key ...
	I0307 23:14:05.772246    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.key: {Name:mkb2a78624bba117cfb5b07a7e10b0d36117f24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:05.773120    6816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.933de409
	I0307 23:14:05.774137    6816 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.933de409 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.58.169 172.20.63.254]
	I0307 23:14:05.848810    6816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.933de409 ...
	I0307 23:14:05.848810    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.933de409: {Name:mk867b12391832dd101173d28ada253452002c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:05.856919    6816 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.933de409 ...
	I0307 23:14:05.856919    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.933de409: {Name:mk82310bcbd37aec0078deb26f85b7bb3c1ec537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:05.856919    6816 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.933de409 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt
	I0307 23:14:05.859219    6816 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.933de409 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key
	I0307 23:14:05.868200    6816 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key
	I0307 23:14:05.868200    6816 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt with IP's: []
	I0307 23:14:06.146600    6816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt ...
	I0307 23:14:06.146600    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt: {Name:mka66b41c9bd0c49bfa9652075c50a9e4f19325d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:06.150510    6816 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key ...
	I0307 23:14:06.150510    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key: {Name:mk5b5c5bc1a9b79de3e7b4b4d8fc04996f0e924f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
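The certs.go:363 steps above generate per-profile certificates signed by the shared minikube CA. A self-contained sketch of the same idea with Go's crypto/x509, generating a throwaway CA plus a client certificate signed by it; the names, validity period and "system:masters" group are illustrative assumptions, not the values minikube necessarily uses:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA key and self-signed CA certificate.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // Client certificate ("minikube-user") signed by the CA.
        cliKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        cliTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }
        cliDER, err := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }

        // Write the client cert PEM; keys would be written the same way with type "RSA PRIVATE KEY".
        out, err := os.Create("client.crt")
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()
        pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: cliDER})
    }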
	I0307 23:14:06.151831    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 23:14:06.152936    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0307 23:14:06.153140    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 23:14:06.153385    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 23:14:06.153540    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0307 23:14:06.153540    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0307 23:14:06.153540    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0307 23:14:06.156945    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0307 23:14:06.161827    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0307 23:14:06.162560    6816 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0307 23:14:06.162560    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0307 23:14:06.162713    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0307 23:14:06.162713    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0307 23:14:06.162713    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0307 23:14:06.163446    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0307 23:14:06.163446    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0307 23:14:06.164046    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:14:06.164184    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0307 23:14:06.164328    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 23:14:06.204918    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 23:14:06.247799    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 23:14:06.287712    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0307 23:14:06.325095    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0307 23:14:06.361866    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 23:14:06.401190    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 23:14:06.437684    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 23:14:06.474974    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0307 23:14:06.514702    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 23:14:06.554044    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0307 23:14:06.600945    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 23:14:06.639813    6816 ssh_runner.go:195] Run: openssl version
	I0307 23:14:06.656337    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0307 23:14:06.689513    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0307 23:14:06.696805    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0307 23:14:06.706132    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0307 23:14:06.722940    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 23:14:06.754742    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 23:14:06.783578    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:14:06.789740    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:14:06.800381    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:14:06.820037    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 23:14:06.845681    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0307 23:14:06.873141    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0307 23:14:06.879435    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0307 23:14:06.890463    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0307 23:14:06.910601    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
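The openssl/ln sequence above is how each CA PEM becomes trusted system-wide: the file's OpenSSL subject hash is computed and a symlink named <hash>.0 is created under /etc/ssl/certs. A small Go sketch of those same two steps, shelling out to openssl exactly as the log does (assumes openssl on PATH and a writable certs directory):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")

        // Recreate the symlink only if it is missing, mirroring the test -L || ln -fs in the log.
        if _, err := os.Lstat(link); err == nil {
            return
        }
        if err := os.Symlink(pemPath, link); err != nil {
            log.Fatal(err)
        }
    }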
	I0307 23:14:06.937887    6816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 23:14:06.944603    6816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0307 23:14:06.944941    6816 kubeadm.go:391] StartCluster: {Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clu
sterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 23:14:06.953439    6816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0307 23:14:06.985616    6816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 23:14:07.012715    6816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 23:14:07.038206    6816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 23:14:07.053677    6816 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 23:14:07.053764    6816 kubeadm.go:156] found existing configuration files:
	
	I0307 23:14:07.065239    6816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0307 23:14:07.078895    6816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 23:14:07.091353    6816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 23:14:07.116598    6816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0307 23:14:07.132755    6816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 23:14:07.143958    6816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 23:14:07.169743    6816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0307 23:14:07.184354    6816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 23:14:07.194962    6816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 23:14:07.221942    6816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0307 23:14:07.239432    6816 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 23:14:07.252044    6816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 23:14:07.267389    6816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0307 23:14:07.691062    6816 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0307 23:14:20.400610    6816 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0307 23:14:20.400842    6816 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 23:14:20.400936    6816 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 23:14:20.400936    6816 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 23:14:20.401548    6816 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0307 23:14:20.401812    6816 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 23:14:20.405108    6816 out.go:204]   - Generating certificates and keys ...
	I0307 23:14:20.405277    6816 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 23:14:20.405277    6816 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 23:14:20.405277    6816 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 23:14:20.405277    6816 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0307 23:14:20.405939    6816 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0307 23:14:20.406140    6816 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0307 23:14:20.406314    6816 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0307 23:14:20.406352    6816 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-792400 localhost] and IPs [172.20.58.169 127.0.0.1 ::1]
	I0307 23:14:20.406352    6816 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0307 23:14:20.406883    6816 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-792400 localhost] and IPs [172.20.58.169 127.0.0.1 ::1]
	I0307 23:14:20.407173    6816 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 23:14:20.407322    6816 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 23:14:20.407322    6816 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0307 23:14:20.407322    6816 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 23:14:20.407322    6816 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 23:14:20.407853    6816 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 23:14:20.408004    6816 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 23:14:20.408165    6816 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 23:14:20.408391    6816 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 23:14:20.408391    6816 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 23:14:20.414321    6816 out.go:204]   - Booting up control plane ...
	I0307 23:14:20.414651    6816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 23:14:20.414907    6816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 23:14:20.414907    6816 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 23:14:20.414907    6816 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 23:14:20.415533    6816 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 23:14:20.415639    6816 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 23:14:20.415639    6816 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 23:14:20.415639    6816 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.585889 seconds
	I0307 23:14:20.416439    6816 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 23:14:20.416439    6816 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 23:14:20.416439    6816 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 23:14:20.417228    6816 kubeadm.go:309] [mark-control-plane] Marking the node ha-792400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 23:14:20.417228    6816 kubeadm.go:309] [bootstrap-token] Using token: dqdu0z.9ukmcum3jye837js
	I0307 23:14:20.419595    6816 out.go:204]   - Configuring RBAC rules ...
	I0307 23:14:20.421713    6816 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 23:14:20.421980    6816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 23:14:20.422272    6816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 23:14:20.422662    6816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 23:14:20.422942    6816 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 23:14:20.422942    6816 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 23:14:20.422942    6816 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 23:14:20.422942    6816 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 23:14:20.422942    6816 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 23:14:20.422942    6816 kubeadm.go:309] 
	I0307 23:14:20.422942    6816 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 23:14:20.422942    6816 kubeadm.go:309] 
	I0307 23:14:20.424056    6816 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 23:14:20.424126    6816 kubeadm.go:309] 
	I0307 23:14:20.424171    6816 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 23:14:20.424271    6816 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 23:14:20.424537    6816 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 23:14:20.424537    6816 kubeadm.go:309] 
	I0307 23:14:20.424640    6816 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 23:14:20.424698    6816 kubeadm.go:309] 
	I0307 23:14:20.424698    6816 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 23:14:20.424698    6816 kubeadm.go:309] 
	I0307 23:14:20.424698    6816 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 23:14:20.424698    6816 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 23:14:20.425398    6816 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 23:14:20.425398    6816 kubeadm.go:309] 
	I0307 23:14:20.425570    6816 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 23:14:20.425730    6816 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 23:14:20.425730    6816 kubeadm.go:309] 
	I0307 23:14:20.425954    6816 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token dqdu0z.9ukmcum3jye837js \
	I0307 23:14:20.426178    6816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 \
	I0307 23:14:20.426178    6816 kubeadm.go:309] 	--control-plane 
	I0307 23:14:20.426400    6816 kubeadm.go:309] 
	I0307 23:14:20.426462    6816 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 23:14:20.426462    6816 kubeadm.go:309] 
	I0307 23:14:20.426462    6816 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token dqdu0z.9ukmcum3jye837js \
	I0307 23:14:20.426462    6816 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
	I0307 23:14:20.427005    6816 cni.go:84] Creating CNI manager for ""
	I0307 23:14:20.427005    6816 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0307 23:14:20.427658    6816 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0307 23:14:20.434145    6816 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 23:14:20.449195    6816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0307 23:14:20.449254    6816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0307 23:14:20.518865    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 23:14:21.730622    6816 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.2117461s)
	I0307 23:14:21.730622    6816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 23:14:21.752041    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792400 minikube.k8s.io/updated_at=2024_03_07T23_14_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=ha-792400 minikube.k8s.io/primary=true
	I0307 23:14:21.752041    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:21.773281    6816 ops.go:34] apiserver oom_adj: -16
	I0307 23:14:21.931549    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:22.442053    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:22.944573    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:23.447513    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:23.942422    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:24.431993    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:24.932618    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:25.435650    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:25.946609    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:26.441602    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:26.930610    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:27.440213    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:27.934655    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:28.440075    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:28.945351    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:29.432042    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:29.948055    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:30.433600    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:30.941225    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:31.434516    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:31.939893    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:32.437839    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:32.944464    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:33.446697    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 23:14:33.655610    6816 kubeadm.go:1106] duration metric: took 11.9248398s to wait for elevateKubeSystemPrivileges
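The repeated "kubectl get sa default" runs between 23:14:21 and 23:14:33 are a simple poll-until-ready loop: elevateKubeSystemPrivileges waits for the default ServiceAccount to exist before applying the RBAC binding. The same pattern sketched in Go, with a hypothetical two-minute deadline and kubectl assumed to be on PATH with a working kubeconfig:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Succeeds only once the "default" ServiceAccount exists in the new cluster.
            if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // roughly the spacing seen in the log
        }
        log.Fatal("timed out waiting for the default service account")
    }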
	W0307 23:14:33.655706    6816 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 23:14:33.655706    6816 kubeadm.go:393] duration metric: took 26.7105127s to StartCluster
	I0307 23:14:33.655706    6816 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:33.655706    6816 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:14:33.657520    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:14:33.659437    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 23:14:33.659510    6816 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 23:14:33.659673    6816 addons.go:69] Setting storage-provisioner=true in profile "ha-792400"
	I0307 23:14:33.659437    6816 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.58.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:14:33.659872    6816 start.go:240] waiting for startup goroutines ...
	I0307 23:14:33.659787    6816 addons.go:234] Setting addon storage-provisioner=true in "ha-792400"
	I0307 23:14:33.659787    6816 addons.go:69] Setting default-storageclass=true in profile "ha-792400"
	I0307 23:14:33.659904    6816 host.go:66] Checking if "ha-792400" exists ...
	I0307 23:14:33.659904    6816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-792400"
	I0307 23:14:33.659904    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:14:33.660599    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:14:33.661389    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:14:33.867642    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0307 23:14:34.425892    6816 start.go:948] {"host.minikube.internal": 172.20.48.1} host record injected into CoreDNS's ConfigMap
	I0307 23:14:35.740278    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:14:35.740278    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:35.746244    6816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:14:35.747445    6816 kapi.go:59] client config for ha-792400: &rest.Config{Host:"https://172.20.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-792400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-792400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
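The kapi.go:59 client config above points client-go at the HA VIP using the profile's client cert/key and the cluster CA. A minimal sketch of building an equivalent clientset with k8s.io/client-go; the paths are shortened stand-ins for the Windows paths in the log, and the node listing is just a smoke test:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://172.20.63.254:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: `C:\...\profiles\ha-792400\client.crt`, // shortened paths, see the log above
                KeyFile:  `C:\...\profiles\ha-792400\client.key`,
                CAFile:   `C:\...\ca.crt`,
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }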
	I0307 23:14:35.748858    6816 cert_rotation.go:137] Starting client certificate rotation controller
	I0307 23:14:35.748858    6816 addons.go:234] Setting addon default-storageclass=true in "ha-792400"
	I0307 23:14:35.748858    6816 host.go:66] Checking if "ha-792400" exists ...
	I0307 23:14:35.750089    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:14:35.758592    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:14:35.758592    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:35.763580    6816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 23:14:35.766442    6816 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 23:14:35.766524    6816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 23:14:35.766593    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:14:37.894961    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:14:37.898775    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:37.899015    6816 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 23:14:37.899054    6816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 23:14:37.899091    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:14:37.958866    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:14:37.958866    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:37.970527    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:14:40.021257    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:14:40.021257    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:40.021257    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:14:40.569173    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:14:40.571244    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:40.571748    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:14:40.729880    6816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 23:14:42.386869    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:14:42.395670    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:42.396080    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:14:42.518775    6816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
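The sshutil.go:53 lines above open an SSH session to the node with the generated id_rsa key, and the addon YAMLs are applied through that session. A small golang.org/x/crypto/ssh sketch of the same connect-and-run step; key path and address are taken from the log, the path is shortened, and host key checking is skipped only because the VM key pair is minikube-generated:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPEM, err := os.ReadFile(`C:\...\machines\ha-792400\id_rsa`) // shortened path from the log
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "172.20.58.169:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("sudo ls /etc/kubernetes/addons")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }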
	I0307 23:14:42.765388    6816 round_trippers.go:463] GET https://172.20.63.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0307 23:14:42.765388    6816 round_trippers.go:469] Request Headers:
	I0307 23:14:42.765982    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:14:42.766034    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:14:42.778156    6816 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0307 23:14:42.781037    6816 round_trippers.go:463] PUT https://172.20.63.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0307 23:14:42.781107    6816 round_trippers.go:469] Request Headers:
	I0307 23:14:42.781107    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:14:42.781179    6816 round_trippers.go:473]     Content-Type: application/json
	I0307 23:14:42.781179    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:14:42.784394    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:14:42.793319    6816 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0307 23:14:42.795356    6816 addons.go:505] duration metric: took 9.1358319s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0307 23:14:42.795875    6816 start.go:245] waiting for cluster config update ...
	I0307 23:14:42.795875    6816 start.go:254] writing updated cluster config ...
	I0307 23:14:42.802204    6816 out.go:177] 
	I0307 23:14:42.807394    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:14:42.807394    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:14:42.810675    6816 out.go:177] * Starting "ha-792400-m02" control-plane node in "ha-792400" cluster
	I0307 23:14:42.817484    6816 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 23:14:42.817484    6816 cache.go:56] Caching tarball of preloaded images
	I0307 23:14:42.818604    6816 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 23:14:42.818668    6816 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 23:14:42.818668    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:14:42.821630    6816 start.go:360] acquireMachinesLock for ha-792400-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 23:14:42.821630    6816 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-792400-m02"
	I0307 23:14:42.821630    6816 start.go:93] Provisioning new machine with config: &{Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:14:42.822280    6816 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0307 23:14:42.825022    6816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 23:14:42.825721    6816 start.go:159] libmachine.API.Create for "ha-792400" (driver="hyperv")
	I0307 23:14:42.825778    6816 client.go:168] LocalClient.Create starting
	I0307 23:14:42.825778    6816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0307 23:14:42.826307    6816 main.go:141] libmachine: Decoding PEM data...
	I0307 23:14:42.826307    6816 main.go:141] libmachine: Parsing certificate...
	I0307 23:14:42.826586    6816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0307 23:14:42.826894    6816 main.go:141] libmachine: Decoding PEM data...
	I0307 23:14:42.826894    6816 main.go:141] libmachine: Parsing certificate...
	I0307 23:14:42.827018    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0307 23:14:44.516776    6816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0307 23:14:44.516776    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:44.516776    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0307 23:14:46.165702    6816 main.go:141] libmachine: [stdout =====>] : False
	
	I0307 23:14:46.165702    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:46.170920    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0307 23:14:47.544367    6816 main.go:141] libmachine: [stdout =====>] : True
	
	I0307 23:14:47.544367    6816 main.go:141] libmachine: [stderr =====>] : 
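
	The three PowerShell probes above are the driver's pre-flight checks: the Hyper-V module must be installed, and the caller must be either a member of BUILTIN\Hyper-V Administrators (well-known SID S-1-5-32-578, False in this run) or a full Administrator (True here). A minimal sketch of the same checks, runnable in an elevated PowerShell session; $principal is an illustrative name:

	    # Is the Hyper-V PowerShell module available?
	    @(Get-Module -ListAvailable hyper-v).Name | Get-Unique                                    # -> Hyper-V
	    # Build a principal for the current Windows identity.
	    $principal = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
	    # Member of BUILTIN\Hyper-V Administrators (well-known SID S-1-5-32-578)?
	    $principal.IsInRole([System.Security.Principal.SecurityIdentifier]::new('S-1-5-32-578'))  # -> False here
	    # Member of BUILTIN\Administrators?
	    $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)               # -> True here
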
	I0307 23:14:47.552645    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0307 23:14:50.698879    6816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0307 23:14:50.698879    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:50.701118    6816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0307 23:14:51.196754    6816 main.go:141] libmachine: Creating SSH key...
	I0307 23:14:51.340808    6816 main.go:141] libmachine: Creating VM...
	I0307 23:14:51.340808    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0307 23:14:53.917171    6816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0307 23:14:53.917171    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:53.927415    6816 main.go:141] libmachine: Using switch "Default Switch"
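
	The switch selection logged above filters Get-VMSwitch for any External switch or the built-in Default Switch, which is matched by its fixed GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444; only the Default Switch exists on this host, so that is what the driver reports using. A readable sketch of the same one-liner; $candidates is an illustrative name:

	    # Candidate switches: External ones, plus the built-in Default Switch by its fixed GUID.
	    $candidates = Hyper-V\Get-VMSwitch |
	        Select-Object Id, Name, SwitchType |
	        Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
	        Sort-Object -Property SwitchType
	    # On this host only the Default Switch matches.
	    ($candidates | Select-Object -First 1).Name                                               # -> Default Switch
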
	I0307 23:14:53.927531    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0307 23:14:55.467393    6816 main.go:141] libmachine: [stdout =====>] : True
	
	I0307 23:14:55.474251    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:55.474251    6816 main.go:141] libmachine: Creating VHD
	I0307 23:14:55.474251    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0307 23:14:58.864246    6816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 73042B3D-DA9F-4F61-85B6-78EDA780FF77
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0307 23:14:58.864384    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:14:58.864438    6816 main.go:141] libmachine: Writing magic tar header
	I0307 23:14:58.864503    6816 main.go:141] libmachine: Writing SSH key tar header
	I0307 23:14:58.873715    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0307 23:15:01.812254    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:01.823422    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:01.823422    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\disk.vhd' -SizeBytes 20000MB
	I0307 23:15:04.160569    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:04.170334    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:04.170334    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-792400-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0307 23:15:07.381123    6816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-792400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0307 23:15:07.392760    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:07.393051    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-792400-m02 -DynamicMemoryEnabled $false
	I0307 23:15:09.345142    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:09.354755    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:09.354953    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-792400-m02 -Count 2
	I0307 23:15:11.252413    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:11.263090    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:11.263090    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-792400-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\boot2docker.iso'
	I0307 23:15:13.531824    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:13.543517    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:13.543628    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-792400-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\disk.vhd'
	I0307 23:15:15.876853    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:15.876853    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:15.876853    6816 main.go:141] libmachine: Starting VM...
	I0307 23:15:15.877088    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-792400-m02
	I0307 23:15:18.644663    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:18.654809    6816 main.go:141] libmachine: [stderr =====>] : 
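
	Strung together, the commands logged since 23:14:55 are the entire m02 VM build: create a small fixed VHD, write the magic tar header and the SSH key into it (the two "Writing ... tar header" steps above), convert it to a dynamic VHD, grow it to the requested 20000MB, create the VM on the Default Switch, pin memory and CPUs, attach the boot ISO and the disk, and start it. A condensed sketch taken directly from the commands above; $name and $dir are illustrative variables:

	    $name = 'ha-792400-m02'
	    $dir  = 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02'

	    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed                        # seed disk; SSH key + tar header are written into it
	    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
	    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB                          # grow to the requested disk size
	    Hyper-V\New-VM $name -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	    Hyper-V\Set-VMMemory -VMName $name -DynamicMemoryEnabled $false                      # fixed 2200MB, no dynamic memory
	    Hyper-V\Set-VMProcessor $name -Count 2
	    Hyper-V\Set-VMDvdDrive -VMName $name -Path "$dir\boot2docker.iso"
	    Hyper-V\Add-VMHardDiskDrive -VMName $name -Path "$dir\disk.vhd"
	    Hyper-V\Start-VM $name
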
	I0307 23:15:18.654809    6816 main.go:141] libmachine: Waiting for host to start...
	I0307 23:15:18.654873    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:20.721909    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:20.722463    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:20.722463    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:22.966802    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:22.966802    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:23.982521    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:25.980713    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:25.980713    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:25.984996    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:28.271559    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:28.271559    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:29.283950    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:31.225122    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:31.225122    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:31.225122    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:33.571679    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:33.572186    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:34.574478    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:36.575528    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:36.575528    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:36.575528    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:38.884082    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:15:38.884082    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:39.885882    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:42.004607    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:42.004708    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:42.004766    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:44.371640    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:15:44.371640    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:44.371742    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:46.347939    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:46.347939    6816 main.go:141] libmachine: [stderr =====>] : 
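
	The alternating state/IP queries between 23:15:18 and 23:15:46 are the "Waiting for host to start" loop: the driver re-reads the VM state and the first IPv4 address of the first network adapter until an address shows up (172.20.50.199 after roughly 26 seconds here). A rough PowerShell equivalent of that loop, with illustrative names:

	    $name = 'ha-792400-m02'
	    do {
	        Start-Sleep -Seconds 1
	        $state = (Hyper-V\Get-VM $name).State
	        $ip    = ((Hyper-V\Get-VM $name).NetworkAdapters[0]).IPAddresses[0]
	    } until ($state -eq 'Running' -and $ip)
	    $ip    # -> 172.20.50.199 in this run
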
	I0307 23:15:46.347939    6816 machine.go:94] provisionDockerMachine start ...
	I0307 23:15:46.349005    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:48.319286    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:48.320206    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:48.320206    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:50.703007    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:15:50.703314    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:50.708067    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:15:50.708860    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:15:50.708860    6816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 23:15:50.844556    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 23:15:50.844556    6816 buildroot.go:166] provisioning hostname "ha-792400-m02"
	I0307 23:15:50.844556    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:52.799066    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:52.799392    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:52.799508    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:55.179428    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:15:55.179617    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:55.185033    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:15:55.185169    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:15:55.185169    6816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792400-m02 && echo "ha-792400-m02" | sudo tee /etc/hostname
	I0307 23:15:55.345388    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792400-m02
	
	I0307 23:15:55.345388    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:15:57.321881    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:15:57.321881    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:57.322386    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:15:59.714125    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:15:59.715122    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:15:59.720428    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:15:59.721066    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:15:59.721066    6816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 23:15:59.866659    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 23:15:59.866659    6816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0307 23:15:59.866659    6816 buildroot.go:174] setting up certificates
	I0307 23:15:59.866659    6816 provision.go:84] configureAuth start
	I0307 23:15:59.866659    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:01.829342    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:01.829342    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:01.829342    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:04.223265    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:04.223265    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:04.223412    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:06.195086    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:06.195086    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:06.195844    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:08.542880    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:08.542880    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:08.542880    6816 provision.go:143] copyHostCerts
	I0307 23:16:08.543995    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0307 23:16:08.544211    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0307 23:16:08.544281    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0307 23:16:08.544617    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0307 23:16:08.545699    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0307 23:16:08.546142    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0307 23:16:08.546142    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0307 23:16:08.546499    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0307 23:16:08.547411    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0307 23:16:08.547701    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0307 23:16:08.547806    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0307 23:16:08.548003    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0307 23:16:08.549065    6816 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-792400-m02 san=[127.0.0.1 172.20.50.199 ha-792400-m02 localhost minikube]
	I0307 23:16:08.608186    6816 provision.go:177] copyRemoteCerts
	I0307 23:16:08.622165    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 23:16:08.623159    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:10.580487    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:10.581554    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:10.581649    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:12.891768    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:12.892519    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:12.892919    6816 sshutil.go:53] new ssh client: &{IP:172.20.50.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
	I0307 23:16:12.993551    6816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3703511s)
	I0307 23:16:12.993551    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0307 23:16:12.993551    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 23:16:13.036249    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0307 23:16:13.036653    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0307 23:16:13.080724    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0307 23:16:13.081128    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 23:16:13.127500    6816 provision.go:87] duration metric: took 13.2607159s to configureAuth
	I0307 23:16:13.127580    6816 buildroot.go:189] setting minikube options for container-runtime
	I0307 23:16:13.127715    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:16:13.127715    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:15.102405    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:15.102405    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:15.102405    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:17.470130    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:17.470202    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:17.474962    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:16:17.474962    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:16:17.474962    6816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 23:16:17.615256    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 23:16:17.615256    6816 buildroot.go:70] root file system type: tmpfs
	I0307 23:16:17.615256    6816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 23:16:17.615256    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:19.538404    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:19.538404    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:19.539285    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:21.844653    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:21.845037    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:21.850386    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:16:21.850548    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:16:21.850548    6816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.58.169"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 23:16:22.016951    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.58.169
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 23:16:22.016951    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:24.033359    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:24.034400    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:24.034612    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:26.408882    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:26.408948    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:26.413873    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:16:26.414389    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:16:26.414454    6816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 23:16:27.528826    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 23:16:27.528826    6816 machine.go:97] duration metric: took 41.1804995s to provisionDockerMachine
	I0307 23:16:27.528826    6816 client.go:171] duration metric: took 1m44.7020574s to LocalClient.Create
	I0307 23:16:27.528826    6816 start.go:167] duration metric: took 1m44.702114s to libmachine.API.Create "ha-792400"
	I0307 23:16:27.528826    6816 start.go:293] postStartSetup for "ha-792400-m02" (driver="hyperv")
	I0307 23:16:27.528826    6816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 23:16:27.544381    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 23:16:27.544381    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:29.593248    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:29.593317    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:29.593372    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:31.941723    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:31.941723    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:31.942269    6816 sshutil.go:53] new ssh client: &{IP:172.20.50.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
	I0307 23:16:32.053131    6816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5087073s)
	I0307 23:16:32.065207    6816 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 23:16:32.071322    6816 info.go:137] Remote host: Buildroot 2023.02.9
	I0307 23:16:32.071322    6816 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0307 23:16:32.071852    6816 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0307 23:16:32.073063    6816 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0307 23:16:32.073129    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0307 23:16:32.084755    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 23:16:32.102124    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0307 23:16:32.142944    6816 start.go:296] duration metric: took 4.6140738s for postStartSetup
	I0307 23:16:32.146235    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:34.136973    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:34.136973    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:34.137054    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:36.511801    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:36.511854    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:36.511854    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:16:36.514196    6816 start.go:128] duration metric: took 1m53.6908405s to createHost
	I0307 23:16:36.514300    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:38.453391    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:38.453391    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:38.453995    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:40.751969    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:40.751969    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:40.757047    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:16:40.757047    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:16:40.757638    6816 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0307 23:16:40.892074    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709853400.904284684
	
	I0307 23:16:40.892167    6816 fix.go:216] guest clock: 1709853400.904284684
	I0307 23:16:40.892167    6816 fix.go:229] Guest: 2024-03-07 23:16:40.904284684 +0000 UTC Remote: 2024-03-07 23:16:36.5143005 +0000 UTC m=+298.224101001 (delta=4.389984184s)
	I0307 23:16:40.892245    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:42.857446    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:42.857540    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:42.857609    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:45.215799    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:45.216183    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:45.221016    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:16:45.222059    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.199 22 <nil> <nil>}
	I0307 23:16:45.222059    6816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709853400
	I0307 23:16:45.367118    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar  7 23:16:40 UTC 2024
	
	I0307 23:16:45.367118    6816 fix.go:236] clock set: Thu Mar  7 23:16:40 UTC 2024
	 (err=<nil>)
	I0307 23:16:45.367118    6816 start.go:83] releasing machines lock for "ha-792400-m02", held for 2m2.5443291s
	I0307 23:16:45.367414    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:47.295042    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:47.295042    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:47.295042    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:49.636476    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:49.637403    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:49.641195    6816 out.go:177] * Found network options:
	I0307 23:16:49.644200    6816 out.go:177]   - NO_PROXY=172.20.58.169
	W0307 23:16:49.646494    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 23:16:49.648507    6816 out.go:177]   - NO_PROXY=172.20.58.169
	W0307 23:16:49.651557    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 23:16:49.652760    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 23:16:49.654067    6816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 23:16:49.655104    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:49.664170    6816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 23:16:49.664170    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:16:51.707627    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:51.707627    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:51.707627    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:51.714025    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:16:51.714025    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:51.714025    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m02 ).networkadapters[0]).ipaddresses[0]
	I0307 23:16:54.128858    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:54.128858    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:54.129170    6816 sshutil.go:53] new ssh client: &{IP:172.20.50.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
	I0307 23:16:54.171510    6816 main.go:141] libmachine: [stdout =====>] : 172.20.50.199
	
	I0307 23:16:54.171576    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:16:54.171914    6816 sshutil.go:53] new ssh client: &{IP:172.20.50.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m02\id_rsa Username:docker}
	I0307 23:16:54.233181    6816 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.568865s)
	W0307 23:16:54.233181    6816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 23:16:54.244235    6816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 23:16:54.348326    6816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6931783s)
	I0307 23:16:54.348326    6816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 23:16:54.348326    6816 start.go:494] detecting cgroup driver to use...
	I0307 23:16:54.348326    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 23:16:54.393187    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 23:16:54.421408    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 23:16:54.438580    6816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 23:16:54.447497    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 23:16:54.477460    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 23:16:54.504946    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 23:16:54.533535    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 23:16:54.562610    6816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 23:16:54.591393    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 23:16:54.623689    6816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 23:16:54.653057    6816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 23:16:54.682269    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:16:54.858072    6816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 23:16:54.889517    6816 start.go:494] detecting cgroup driver to use...
	I0307 23:16:54.901711    6816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 23:16:54.937325    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 23:16:54.967607    6816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 23:16:55.007178    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 23:16:55.040057    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 23:16:55.075379    6816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 23:16:55.136539    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 23:16:55.156124    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 23:16:55.198294    6816 ssh_runner.go:195] Run: which cri-dockerd
	I0307 23:16:55.215697    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 23:16:55.231582    6816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 23:16:55.274264    6816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 23:16:55.453497    6816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 23:16:55.633386    6816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 23:16:55.633557    6816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 23:16:55.674647    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:16:55.866144    6816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 23:16:57.383151    6816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5169921s)
	I0307 23:16:57.397670    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 23:16:57.431508    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 23:16:57.464067    6816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 23:16:57.659537    6816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 23:16:57.843349    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:16:58.034056    6816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 23:16:58.072592    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 23:16:58.104418    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:16:58.285231    6816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0307 23:16:58.376721    6816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 23:16:58.387574    6816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 23:16:58.395907    6816 start.go:562] Will wait 60s for crictl version
	I0307 23:16:58.407297    6816 ssh_runner.go:195] Run: which crictl
	I0307 23:16:58.423671    6816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 23:16:58.488539    6816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0307 23:16:58.498215    6816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 23:16:58.537695    6816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 23:16:58.571959    6816 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0307 23:16:58.574754    6816 out.go:177]   - env NO_PROXY=172.20.58.169
	I0307 23:16:58.577371    6816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0307 23:16:58.580157    6816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0307 23:16:58.581199    6816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0307 23:16:58.581199    6816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0307 23:16:58.581199    6816 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0307 23:16:58.583543    6816 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0307 23:16:58.583543    6816 ip.go:210] interface addr: 172.20.48.1/20
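
	The ip.go lines above are the host-side half of host.minikube.internal: the driver scans the host's adapters for the one named "vEthernet (Default Switch)" and picks its IPv4 address, 172.20.48.1/20, which the next two commands pin into the guest's /etc/hosts. minikube does this lookup in Go; as a hedged host-side equivalent only, the standard NetTCPIP cmdlet resolves the same address:

	    # Illustrative only: what the interface scan above resolves to on this host.
	    (Get-NetIPAddress -InterfaceAlias 'vEthernet (Default Switch)' -AddressFamily IPv4).IPAddress   # -> 172.20.48.1
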
	I0307 23:16:58.592642    6816 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0307 23:16:58.598873    6816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 23:16:58.617532    6816 mustload.go:65] Loading cluster: ha-792400
	I0307 23:16:58.618148    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:16:58.618994    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:17:00.594607    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:17:00.595152    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:17:00.595152    6816 host.go:66] Checking if "ha-792400" exists ...
	I0307 23:17:00.595895    6816 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400 for IP: 172.20.50.199
	I0307 23:17:00.595895    6816 certs.go:194] generating shared ca certs ...
	I0307 23:17:00.595895    6816 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:17:00.596522    6816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0307 23:17:00.596856    6816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0307 23:17:00.596986    6816 certs.go:256] generating profile certs ...
	I0307 23:17:00.597873    6816 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.key
	I0307 23:17:00.598046    6816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6977efa7
	I0307 23:17:00.598126    6816 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6977efa7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.58.169 172.20.50.199 172.20.63.254]
	I0307 23:17:00.709500    6816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6977efa7 ...
	I0307 23:17:00.709500    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6977efa7: {Name:mk4dc464a636a1c1fc40a8d49a1c49b8951b5d17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:17:00.711557    6816 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6977efa7 ...
	I0307 23:17:00.711557    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6977efa7: {Name:mk38eabc37a82b7f04a1b43f06a56e71bc33b402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:17:00.711877    6816 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6977efa7 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt
	I0307 23:17:00.724637    6816 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6977efa7 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key
	I0307 23:17:00.725671    6816 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key
	I0307 23:17:00.725671    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 23:17:00.725671    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0307 23:17:00.725671    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 23:17:00.725671    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 23:17:00.725671    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0307 23:17:00.726636    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0307 23:17:00.726636    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0307 23:17:00.726636    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0307 23:17:00.726636    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0307 23:17:00.726636    6816 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0307 23:17:00.726636    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0307 23:17:00.727642    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0307 23:17:00.727642    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0307 23:17:00.727642    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0307 23:17:00.727642    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0307 23:17:00.728765    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0307 23:17:00.729026    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:17:00.729271    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0307 23:17:00.729415    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:17:02.710509    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:17:02.710578    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:17:02.710711    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:17:05.051003    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:17:05.051003    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:17:05.051229    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:17:05.141664    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0307 23:17:05.149827    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0307 23:17:05.178926    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0307 23:17:05.185073    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0307 23:17:05.215054    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0307 23:17:05.222381    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0307 23:17:05.252861    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0307 23:17:05.258764    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0307 23:17:05.285866    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0307 23:17:05.292316    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0307 23:17:05.323712    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0307 23:17:05.329555    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0307 23:17:05.352747    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 23:17:05.397189    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 23:17:05.438417    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 23:17:05.484471    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0307 23:17:05.525379    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0307 23:17:05.569157    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 23:17:05.608748    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 23:17:05.649512    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 23:17:05.690556    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0307 23:17:05.729847    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 23:17:05.770325    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0307 23:17:05.809783    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0307 23:17:05.838711    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0307 23:17:05.867013    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0307 23:17:05.896975    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0307 23:17:05.926118    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0307 23:17:05.954267    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0307 23:17:05.983720    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0307 23:17:06.024558    6816 ssh_runner.go:195] Run: openssl version
	I0307 23:17:06.048151    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0307 23:17:06.075656    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0307 23:17:06.081850    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0307 23:17:06.092642    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0307 23:17:06.110914    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 23:17:06.138687    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 23:17:06.167218    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:17:06.173674    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:17:06.182994    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:17:06.202128    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 23:17:06.233724    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0307 23:17:06.261771    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0307 23:17:06.267932    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0307 23:17:06.277449    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0307 23:17:06.297246    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
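
Note: the run of openssl/ln commands above installs each CA into the guest's trust store by subject hash — copy the PEM under /usr/share/ca-certificates, then link /etc/ssl/certs/<hash>.0 to it. A minimal Go sketch of that same idea follows (this is not minikube's code; the path in main is illustrative only): it asks openssl for the 8-hex-digit subject hash and creates the hash-named symlink.

// Sketch only: install a CA certificate the way the log above does it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath string) error {
	// `openssl x509 -hash -noout -in <file>` prints the 8-hex-digit subject
	// hash that OpenSSL uses to look up certificates in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, then point <hash>.0 at the installed PEM,
	// mirroring the `test -L ... || ln -fs ...` commands in the log.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative path; the log installs minikubeCA.pem, 8324.pem and 83242.pem.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
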
	I0307 23:17:06.325969    6816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 23:17:06.331875    6816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0307 23:17:06.331875    6816 kubeadm.go:928] updating node {m02 172.20.50.199 8443 v1.28.4 docker true true} ...
	I0307 23:17:06.331875    6816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.50.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 23:17:06.332416    6816 kube-vip.go:101] generating kube-vip config ...
	I0307 23:17:06.332416    6816 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.63.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
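
Note: the kube-vip static-pod manifest above is generated per cluster, with the HA virtual IP (172.20.63.254) and API server port (8443) filled in. As a rough sketch of that pattern — not minikube's actual kube-vip.go template — the snippet below renders a trimmed-down manifest from a text/template using those two values.

// Sketch only: render a reduced kube-vip manifest from a template.
package main

import (
	"os"
	"text/template"
)

// manifest is a trimmed-down stand-in for the full config printed above.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: "{{ .VIP }}"
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the log: the HA virtual IP and the API server port.
	_ = tmpl.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "172.20.63.254", Port: 8443})
}
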
	I0307 23:17:06.342787    6816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 23:17:06.357831    6816 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0307 23:17:06.369808    6816 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0307 23:17:06.387661    6816 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0307 23:17:06.387820    6816 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
	I0307 23:17:06.387820    6816 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
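
Note: each download URL above carries a checksum=file:…sha256 hint, meaning the fetched binary is verified against the digest published next to it on dl.k8s.io. A small Go sketch of that verification step follows; the file names are placeholders, and the .sha256 file is assumed to contain just the hex digest.

// Sketch only: verify a downloaded binary against its published .sha256 file.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"log"
	"os"
	"strings"
)

func verifySHA256(binaryPath, sumPath string) error {
	want, err := os.ReadFile(sumPath)
	if err != nil {
		return err
	}
	f, err := os.Open(binaryPath)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := fmt.Sprintf("%x", h.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch: got %s", got)
	}
	return nil
}

func main() {
	if err := verifySHA256("kubeadm", "kubeadm.sha256"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("checksum OK")
}
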
	I0307 23:17:07.501992    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0307 23:17:07.511846    6816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0307 23:17:07.519798    6816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0307 23:17:07.519798    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0307 23:17:11.485396    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0307 23:17:11.495542    6816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0307 23:17:11.503127    6816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0307 23:17:11.503247    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0307 23:17:14.882122    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 23:17:14.905074    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0307 23:17:14.916375    6816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0307 23:17:14.923364    6816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0307 23:17:14.923504    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0307 23:17:15.718085    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0307 23:17:15.734070    6816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0307 23:17:15.763051    6816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 23:17:15.791514    6816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1262 bytes)
	I0307 23:17:15.831045    6816 ssh_runner.go:195] Run: grep 172.20.63.254	control-plane.minikube.internal$ /etc/hosts
	I0307 23:17:15.836141    6816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 23:17:15.866883    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:17:16.065483    6816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 23:17:16.090409    6816 host.go:66] Checking if "ha-792400" exists ...
	I0307 23:17:16.091639    6816 start.go:316] joinCluster: &{Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.50.199 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 23:17:16.091757    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0307 23:17:16.091757    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:17:18.070104    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:17:18.070468    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:17:18.070468    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:17:20.366651    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:17:20.366651    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:17:20.367193    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:17:20.753158    6816 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.6613569s)
	I0307 23:17:20.753158    6816 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.20.50.199 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:17:20.753158    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0nf1yh.8o5o4jhgw43h1vbc --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-792400-m02 --control-plane --apiserver-advertise-address=172.20.50.199 --apiserver-bind-port=8443"
	I0307 23:18:17.416732    6816 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0nf1yh.8o5o4jhgw43h1vbc --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-792400-m02 --control-plane --apiserver-advertise-address=172.20.50.199 --apiserver-bind-port=8443": (56.6630422s)
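
Note: the join command above authenticates the cluster CA via --discovery-token-ca-cert-hash, which kubeadm derives as the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo. A hedged Go sketch of that derivation follows (the ca.crt path is an assumption for illustration); its output should correspond to the sha256:… value passed to kubeadm join.

// Sketch only: compute a kubeadm-style discovery-token-ca-cert-hash.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func caCertHash(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	// kubeadm hashes the raw DER SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	// Path assumed for illustration; the log copies the CA to this location.
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(h)
}
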
	I0307 23:18:17.416732    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0307 23:18:18.072127    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792400-m02 minikube.k8s.io/updated_at=2024_03_07T23_18_18_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=ha-792400 minikube.k8s.io/primary=false
	I0307 23:18:18.244060    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-792400-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0307 23:18:18.403934    6816 start.go:318] duration metric: took 1m2.3118557s to joinCluster
	I0307 23:18:18.404299    6816 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.20.50.199 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:18:18.407534    6816 out.go:177] * Verifying Kubernetes components...
	I0307 23:18:18.405170    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:18:18.424825    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:18:18.759893    6816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 23:18:18.791415    6816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:18:18.792370    6816 kapi.go:59] client config for ha-792400: &rest.Config{Host:"https://172.20.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-792400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-792400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0307 23:18:18.792520    6816 kubeadm.go:477] Overriding stale ClientConfig host https://172.20.63.254:8443 with https://172.20.58.169:8443
	I0307 23:18:18.793079    6816 node_ready.go:35] waiting up to 6m0s for node "ha-792400-m02" to be "Ready" ...
	I0307 23:18:18.793079    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:18.793079    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:18.793079    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:18.793079    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:18.812144    6816 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0307 23:18:19.300448    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:19.300448    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:19.300448    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:19.300448    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:19.308543    6816 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0307 23:18:19.805349    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:19.805349    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:19.805683    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:19.805683    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:19.809974    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:20.299324    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:20.299324    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:20.299324    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:20.299324    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:20.305477    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:18:20.809465    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:20.810462    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:20.810462    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:20.810462    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:20.839749    6816 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0307 23:18:20.840296    6816 node_ready.go:53] node "ha-792400-m02" has status "Ready":"False"
	I0307 23:18:21.299083    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:21.299158    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:21.299158    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:21.299201    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:21.306197    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:18:21.806807    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:21.806885    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:21.806885    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:21.806933    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:21.811286    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:22.298247    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:22.298247    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:22.298247    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:22.298247    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:22.302638    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:22.805523    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:22.805751    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:22.805751    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:22.805751    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:22.811721    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:23.298381    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:23.298611    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:23.298611    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:23.298611    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:23.303503    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:23.304364    6816 node_ready.go:53] node "ha-792400-m02" has status "Ready":"False"
	I0307 23:18:23.807388    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:23.807417    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:23.807417    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:23.807417    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:23.811917    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:24.297579    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:24.297579    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:24.297579    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:24.297579    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:24.305078    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:18:24.804203    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:24.804203    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:24.804203    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:24.804203    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:24.962053    6816 round_trippers.go:574] Response Status: 200 OK in 157 milliseconds
	I0307 23:18:25.294110    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:25.294205    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:25.294205    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:25.294205    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:25.298518    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:25.798497    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:25.798563    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:25.798563    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:25.798563    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:25.803508    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:25.803807    6816 node_ready.go:53] node "ha-792400-m02" has status "Ready":"False"
	I0307 23:18:26.299244    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:26.299400    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:26.299400    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:26.299400    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:26.305210    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:26.804632    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:26.804719    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:26.804719    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:26.804719    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:26.809418    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:27.293556    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:27.293556    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:27.293556    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:27.293556    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:27.299138    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:27.799716    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:27.800026    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:27.800026    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:27.800026    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:27.804309    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:27.804309    6816 node_ready.go:53] node "ha-792400-m02" has status "Ready":"False"
	I0307 23:18:28.306790    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:28.306859    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.306859    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.306859    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.311110    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:28.809393    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:28.809393    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.809393    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.809393    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.815222    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:28.816229    6816 node_ready.go:49] node "ha-792400-m02" has status "Ready":"True"
	I0307 23:18:28.816302    6816 node_ready.go:38] duration metric: took 10.0230559s for node "ha-792400-m02" to be "Ready" ...
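
Note: the node_ready loop above keeps GETting /api/v1/nodes/ha-792400-m02 until the node reports a Ready condition of True. A minimal client-go sketch of that check follows; loading the kubeconfig from the default home location is an assumption for illustration, not taken from the log.

// Sketch only: check whether a node's Ready condition is True via client-go.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumed: kubeconfig in the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ready, err := nodeReady(cs, "ha-792400-m02")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Ready:", ready)
}
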
	I0307 23:18:28.816302    6816 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 23:18:28.816464    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:18:28.816464    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.816464    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.816464    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.826257    6816 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0307 23:18:28.835651    6816 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-28rtr" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.835651    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-28rtr
	I0307 23:18:28.835651    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.835651    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.835651    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.840206    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:28.840500    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:28.840500    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.840500    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.840500    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.847020    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:18:28.847197    6816 pod_ready.go:92] pod "coredns-5dd5756b68-28rtr" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:28.847197    6816 pod_ready.go:81] duration metric: took 11.5461ms for pod "coredns-5dd5756b68-28rtr" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.847197    6816 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rx9dg" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.847750    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rx9dg
	I0307 23:18:28.847750    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.847750    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.847750    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.854481    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:18:28.855198    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:28.855198    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.855198    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.855198    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.858552    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:18:28.859485    6816 pod_ready.go:92] pod "coredns-5dd5756b68-rx9dg" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:28.859485    6816 pod_ready.go:81] duration metric: took 12.2877ms for pod "coredns-5dd5756b68-rx9dg" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.859485    6816 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.859485    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400
	I0307 23:18:28.859485    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.859485    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.859485    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.864801    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:28.865971    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:28.866000    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.866000    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.866038    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.871041    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:28.871667    6816 pod_ready.go:92] pod "etcd-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:28.871667    6816 pod_ready.go:81] duration metric: took 12.1818ms for pod "etcd-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.871667    6816 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.871667    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m02
	I0307 23:18:28.871667    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.872243    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.872243    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.880060    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:18:28.880880    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:28.880880    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:28.880880    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:28.880880    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:28.895937    6816 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0307 23:18:28.895937    6816 pod_ready.go:92] pod "etcd-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:28.895937    6816 pod_ready.go:81] duration metric: took 24.2699ms for pod "etcd-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:28.895937    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:29.014093    6816 request.go:629] Waited for 118.1545ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400
	I0307 23:18:29.014369    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400
	I0307 23:18:29.014471    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:29.014471    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:29.014471    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:29.020691    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:18:29.218543    6816 request.go:629] Waited for 196.9149ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:29.218634    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:29.218634    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:29.218714    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:29.218714    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:29.223468    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:29.224513    6816 pod_ready.go:92] pod "kube-apiserver-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:29.224513    6816 pod_ready.go:81] duration metric: took 328.5728ms for pod "kube-apiserver-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:29.224513    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:29.421312    6816 request.go:629] Waited for 196.5754ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400-m02
	I0307 23:18:29.421545    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400-m02
	I0307 23:18:29.421610    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:29.421610    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:29.421610    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:29.427154    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:29.623335    6816 request.go:629] Waited for 194.9154ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:29.623434    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:29.623434    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:29.623434    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:29.623434    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:29.626389    6816 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0307 23:18:29.628293    6816 pod_ready.go:92] pod "kube-apiserver-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:29.628293    6816 pod_ready.go:81] duration metric: took 403.7763ms for pod "kube-apiserver-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:29.628293    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:29.812274    6816 request.go:629] Waited for 183.6964ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400
	I0307 23:18:29.812417    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400
	I0307 23:18:29.812456    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:29.812456    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:29.812456    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:29.817273    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:30.018485    6816 request.go:629] Waited for 199.9866ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:30.018627    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:30.018627    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:30.018627    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:30.018627    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:30.026078    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:18:30.027375    6816 pod_ready.go:92] pod "kube-controller-manager-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:30.027466    6816 pod_ready.go:81] duration metric: took 399.1691ms for pod "kube-controller-manager-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:30.027466    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:30.222541    6816 request.go:629] Waited for 194.5931ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400-m02
	I0307 23:18:30.222628    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400-m02
	I0307 23:18:30.222628    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:30.222628    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:30.222628    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:30.227421    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:30.410818    6816 request.go:629] Waited for 181.4497ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:30.410867    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:30.410867    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:30.410867    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:30.410867    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:30.415499    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:30.417054    6816 pod_ready.go:92] pod "kube-controller-manager-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:30.417054    6816 pod_ready.go:81] duration metric: took 389.5842ms for pod "kube-controller-manager-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:30.417054    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6wd5" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:30.614134    6816 request.go:629] Waited for 196.8574ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6wd5
	I0307 23:18:30.614134    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6wd5
	I0307 23:18:30.614134    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:30.614134    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:30.614393    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:30.618431    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:30.815276    6816 request.go:629] Waited for 194.2966ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:30.815276    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:30.815276    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:30.815540    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:30.815659    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:30.820239    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:30.821389    6816 pod_ready.go:92] pod "kube-proxy-j6wd5" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:30.821389    6816 pod_ready.go:81] duration metric: took 404.3317ms for pod "kube-proxy-j6wd5" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:30.821389    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zxmcc" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:31.016458    6816 request.go:629] Waited for 194.8769ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxmcc
	I0307 23:18:31.016458    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxmcc
	I0307 23:18:31.016458    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:31.016458    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:31.016822    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:31.022644    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:31.219281    6816 request.go:629] Waited for 195.6742ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:31.219372    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:31.219585    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:31.219585    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:31.219585    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:31.224391    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:31.224987    6816 pod_ready.go:92] pod "kube-proxy-zxmcc" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:31.224987    6816 pod_ready.go:81] duration metric: took 403.4887ms for pod "kube-proxy-zxmcc" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:31.224987    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:31.421342    6816 request.go:629] Waited for 196.3531ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400
	I0307 23:18:31.421342    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400
	I0307 23:18:31.421342    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:31.421342    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:31.421342    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:31.426405    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:18:31.611193    6816 request.go:629] Waited for 183.7264ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:31.611283    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:18:31.611283    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:31.611495    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:31.611495    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:31.618314    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:18:31.618797    6816 pod_ready.go:92] pod "kube-scheduler-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:31.618797    6816 pod_ready.go:81] duration metric: took 393.8062ms for pod "kube-scheduler-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:31.619395    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:31.813892    6816 request.go:629] Waited for 194.2507ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400-m02
	I0307 23:18:31.813982    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400-m02
	I0307 23:18:31.813982    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:31.813982    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:31.814136    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:31.820851    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:18:32.017806    6816 request.go:629] Waited for 196.009ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:32.017966    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:18:32.017966    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:32.017966    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:32.017966    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:32.022199    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:32.023311    6816 pod_ready.go:92] pod "kube-scheduler-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:18:32.023402    6816 pod_ready.go:81] duration metric: took 403.9719ms for pod "kube-scheduler-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:18:32.023402    6816 pod_ready.go:38] duration metric: took 3.2070692s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
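
For reference, the pod_ready wait logged above keys off each pod's PodReady condition. A minimal client-go sketch of that check follows; the kubeconfig path is an assumed placeholder and the pod name is just one of the pods named in the log, so this is illustrative rather than minikube's actual wait loop.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True,
// which is the same signal the pod_ready.go wait above reports.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-zxmcc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podIsReady(pod))
}
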
	I0307 23:18:32.023402    6816 api_server.go:52] waiting for apiserver process to appear ...
	I0307 23:18:32.035197    6816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 23:18:32.062836    6816 api_server.go:72] duration metric: took 13.6583094s to wait for apiserver process to appear ...
	I0307 23:18:32.062836    6816 api_server.go:88] waiting for apiserver healthz status ...
	I0307 23:18:32.062922    6816 api_server.go:253] Checking apiserver healthz at https://172.20.58.169:8443/healthz ...
	I0307 23:18:32.070969    6816 api_server.go:279] https://172.20.58.169:8443/healthz returned 200:
	ok
	I0307 23:18:32.071308    6816 round_trippers.go:463] GET https://172.20.58.169:8443/version
	I0307 23:18:32.071308    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:32.071308    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:32.071308    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:32.073096    6816 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 23:18:32.073732    6816 api_server.go:141] control plane version: v1.28.4
	I0307 23:18:32.073782    6816 api_server.go:131] duration metric: took 10.8595ms to wait for apiserver health ...
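
The healthz and version probes above amount to two GETs against the API server. A rough net/http sketch is below; the hard-coded endpoint matches the log, but skipping TLS verification is an assumption made to keep the example short, not how minikube authenticates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Illustrative only: skip cert verification instead of wiring up the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	// Health probe: a healthy control plane answers 200 with body "ok".
	resp, err := client.Get("https://172.20.58.169:8443/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

	// Version probe: returns a JSON blob whose gitVersion is v1.28.4 in this run.
	resp, err = client.Get("https://172.20.58.169:8443/version")
	if err != nil {
		panic(err)
	}
	body, _ = io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("version: %s\n", body)
}
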
	I0307 23:18:32.073850    6816 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 23:18:32.220678    6816 request.go:629] Waited for 146.7393ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:18:32.220883    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:18:32.220883    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:32.220883    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:32.220883    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:32.228173    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:18:32.233605    6816 system_pods.go:59] 17 kube-system pods found
	I0307 23:18:32.234185    6816 system_pods.go:61] "coredns-5dd5756b68-28rtr" [8f70fcea-fb5e-4bfe-a184-a7487922459d] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "coredns-5dd5756b68-rx9dg" [09969ba6-29bd-449a-8df2-85d52c1cca8e] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "etcd-ha-792400" [6d4e209d-fc9c-4f71-a13f-b359b65ae7ad] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "etcd-ha-792400-m02" [ed952253-b72b-4443-9189-ad1dcfabc268] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kindnet-7bztm" [a0918f25-6cde-462e-8f12-58c424e25ffa] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kindnet-fvx87" [e26e6f69-a3e8-4b89-9ec0-21959683db17] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-apiserver-ha-792400" [2356c8e9-8a52-4bf2-b8e6-24974e45f15c] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-apiserver-ha-792400-m02" [54d24fa6-cc12-47f7-89b8-07c35b710b9c] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-controller-manager-ha-792400" [57efa972-84b4-4614-b8e0-c6e3eeef55f7] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-controller-manager-ha-792400-m02" [3a897c1b-a6a9-4ecb-abb4-f350789cde8a] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-proxy-j6wd5" [bc09092e-551d-448f-af38-f8412bdcfe3a] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-proxy-zxmcc" [0a429b85-7b58-447e-963b-39976d48fba0] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-scheduler-ha-792400" [24c51162-87f0-4232-bc6a-32aef6110baa] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-scheduler-ha-792400-m02" [26d95aae-6bc6-4245-a5de-3848b6e4d1c2] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-vip-ha-792400" [31f2517d-5b88-4c07-87cd-66c667534a2f] Running
	I0307 23:18:32.234185    6816 system_pods.go:61] "kube-vip-ha-792400-m02" [b41fc2d0-39a4-4fba-867d-371a5c918c90] Running
	I0307 23:18:32.234348    6816 system_pods.go:61] "storage-provisioner" [d2cfae90-8302-4ce4-8292-de4938b0b9ae] Running
	I0307 23:18:32.234348    6816 system_pods.go:74] duration metric: took 160.4484ms to wait for pod list to return data ...
	I0307 23:18:32.234348    6816 default_sa.go:34] waiting for default service account to be created ...
	I0307 23:18:32.424721    6816 request.go:629] Waited for 190.128ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/default/serviceaccounts
	I0307 23:18:32.424721    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/default/serviceaccounts
	I0307 23:18:32.424721    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:32.424721    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:32.424721    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:32.429359    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:32.429800    6816 default_sa.go:45] found service account: "default"
	I0307 23:18:32.429899    6816 default_sa.go:55] duration metric: took 195.4502ms for default service account to be created ...
	I0307 23:18:32.429899    6816 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 23:18:32.612348    6816 request.go:629] Waited for 182.1147ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:18:32.612416    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:18:32.612416    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:32.612553    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:32.612608    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:32.620007    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:18:32.626443    6816 system_pods.go:86] 17 kube-system pods found
	I0307 23:18:32.626443    6816 system_pods.go:89] "coredns-5dd5756b68-28rtr" [8f70fcea-fb5e-4bfe-a184-a7487922459d] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "coredns-5dd5756b68-rx9dg" [09969ba6-29bd-449a-8df2-85d52c1cca8e] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "etcd-ha-792400" [6d4e209d-fc9c-4f71-a13f-b359b65ae7ad] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "etcd-ha-792400-m02" [ed952253-b72b-4443-9189-ad1dcfabc268] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kindnet-7bztm" [a0918f25-6cde-462e-8f12-58c424e25ffa] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kindnet-fvx87" [e26e6f69-a3e8-4b89-9ec0-21959683db17] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-apiserver-ha-792400" [2356c8e9-8a52-4bf2-b8e6-24974e45f15c] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-apiserver-ha-792400-m02" [54d24fa6-cc12-47f7-89b8-07c35b710b9c] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-controller-manager-ha-792400" [57efa972-84b4-4614-b8e0-c6e3eeef55f7] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-controller-manager-ha-792400-m02" [3a897c1b-a6a9-4ecb-abb4-f350789cde8a] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-proxy-j6wd5" [bc09092e-551d-448f-af38-f8412bdcfe3a] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-proxy-zxmcc" [0a429b85-7b58-447e-963b-39976d48fba0] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-scheduler-ha-792400" [24c51162-87f0-4232-bc6a-32aef6110baa] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-scheduler-ha-792400-m02" [26d95aae-6bc6-4245-a5de-3848b6e4d1c2] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-vip-ha-792400" [31f2517d-5b88-4c07-87cd-66c667534a2f] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "kube-vip-ha-792400-m02" [b41fc2d0-39a4-4fba-867d-371a5c918c90] Running
	I0307 23:18:32.626443    6816 system_pods.go:89] "storage-provisioner" [d2cfae90-8302-4ce4-8292-de4938b0b9ae] Running
	I0307 23:18:32.626443    6816 system_pods.go:126] duration metric: took 196.5429ms to wait for k8s-apps to be running ...
	I0307 23:18:32.626443    6816 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 23:18:32.636205    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 23:18:32.660703    6816 system_svc.go:56] duration metric: took 34.2594ms WaitForService to wait for kubelet
	I0307 23:18:32.660703    6816 kubeadm.go:576] duration metric: took 14.2561706s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 23:18:32.660826    6816 node_conditions.go:102] verifying NodePressure condition ...
	I0307 23:18:32.816160    6816 request.go:629] Waited for 155.2814ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes
	I0307 23:18:32.816433    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes
	I0307 23:18:32.816433    6816 round_trippers.go:469] Request Headers:
	I0307 23:18:32.816515    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:18:32.816534    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:18:32.821312    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:18:32.822345    6816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 23:18:32.822345    6816 node_conditions.go:123] node cpu capacity is 2
	I0307 23:18:32.822345    6816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 23:18:32.822345    6816 node_conditions.go:123] node cpu capacity is 2
	I0307 23:18:32.822345    6816 node_conditions.go:105] duration metric: took 161.5169ms to run NodePressure ...
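
The NodePressure check above lists the nodes and reads their capacity figures and pressure conditions. A client-go sketch of the same idea, again with an assumed kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity figures matching the "ephemeral capacity" / "cpu capacity" log lines above.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())

		// Pressure conditions should all be False on a healthy node.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
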
	I0307 23:18:32.822345    6816 start.go:240] waiting for startup goroutines ...
	I0307 23:18:32.822345    6816 start.go:254] writing updated cluster config ...
	I0307 23:18:32.828095    6816 out.go:177] 
	I0307 23:18:32.838120    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:18:32.838120    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:18:32.845166    6816 out.go:177] * Starting "ha-792400-m03" control-plane node in "ha-792400" cluster
	I0307 23:18:32.847373    6816 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 23:18:32.847373    6816 cache.go:56] Caching tarball of preloaded images
	I0307 23:18:32.847892    6816 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0307 23:18:32.848072    6816 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0307 23:18:32.848316    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:18:32.855116    6816 start.go:360] acquireMachinesLock for ha-792400-m03: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0307 23:18:32.855116    6816 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-792400-m03"
	I0307 23:18:32.855116    6816 start.go:93] Provisioning new machine with config: &{Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.50.199 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:18:32.856034    6816 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0307 23:18:32.860028    6816 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0307 23:18:32.861048    6816 start.go:159] libmachine.API.Create for "ha-792400" (driver="hyperv")
	I0307 23:18:32.861048    6816 client.go:168] LocalClient.Create starting
	I0307 23:18:32.861048    6816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0307 23:18:32.861048    6816 main.go:141] libmachine: Decoding PEM data...
	I0307 23:18:32.862049    6816 main.go:141] libmachine: Parsing certificate...
	I0307 23:18:32.862049    6816 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0307 23:18:32.862049    6816 main.go:141] libmachine: Decoding PEM data...
	I0307 23:18:32.862049    6816 main.go:141] libmachine: Parsing certificate...
	I0307 23:18:32.862049    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0307 23:18:34.674427    6816 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0307 23:18:34.675322    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:34.675392    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0307 23:18:36.332405    6816 main.go:141] libmachine: [stdout =====>] : False
	
	I0307 23:18:36.332405    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:36.332524    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0307 23:18:37.745919    6816 main.go:141] libmachine: [stdout =====>] : True
	
	I0307 23:18:37.746187    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:37.746187    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0307 23:18:41.213730    6816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0307 23:18:41.213730    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:41.215881    6816 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0307 23:18:41.691941    6816 main.go:141] libmachine: Creating SSH key...
	I0307 23:18:41.918056    6816 main.go:141] libmachine: Creating VM...
	I0307 23:18:41.918056    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0307 23:18:44.648200    6816 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0307 23:18:44.649036    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:44.649036    6816 main.go:141] libmachine: Using switch "Default Switch"
	I0307 23:18:44.649166    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0307 23:18:46.332251    6816 main.go:141] libmachine: [stdout =====>] : True
	
	I0307 23:18:46.332251    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:46.333265    6816 main.go:141] libmachine: Creating VHD
	I0307 23:18:46.333314    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0307 23:18:49.891571    6816 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 984A58C8-77D7-44BA-AC0B-7F6204C11272
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0307 23:18:49.892312    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:49.892365    6816 main.go:141] libmachine: Writing magic tar header
	I0307 23:18:49.892365    6816 main.go:141] libmachine: Writing SSH key tar header
	I0307 23:18:49.901638    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0307 23:18:52.973905    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:18:52.973905    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:52.973905    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\disk.vhd' -SizeBytes 20000MB
	I0307 23:18:55.420715    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:18:55.420853    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:55.420853    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-792400-m03 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0307 23:18:58.862541    6816 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-792400-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0307 23:18:58.863445    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:18:58.863445    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-792400-m03 -DynamicMemoryEnabled $false
	I0307 23:19:00.997697    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:00.997697    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:00.998370    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-792400-m03 -Count 2
	I0307 23:19:03.069479    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:03.069479    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:03.069479    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-792400-m03 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\boot2docker.iso'
	I0307 23:19:05.522516    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:05.522516    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:05.522792    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-792400-m03 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\disk.vhd'
	I0307 23:19:08.003610    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:08.004205    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:08.004205    6816 main.go:141] libmachine: Starting VM...
	I0307 23:19:08.004205    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-792400-m03
	I0307 23:19:10.892429    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:10.892500    6816 main.go:141] libmachine: [stderr =====>] : 
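
Each "[executing ==>]" line above is one non-interactive PowerShell invocation of a Hyper-V cmdlet. A minimal Go sketch of that shell-out pattern follows; the ps helper and the shortened machine path are hypothetical, and only the cmdlet names and VM settings are taken from the log.

package main

import (
	"fmt"
	"os/exec"
)

// ps runs a single Hyper-V cmdlet non-interactively and returns its combined output.
// This mirrors the "[executing ==>]" lines above; it is an illustrative helper only.
func ps(command string) (string, error) {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", command,
	).CombinedOutput()
	return string(out), err
}

func main() {
	// Shortened, hypothetical machine path; the log uses the Jenkins profile directory.
	steps := []string{
		`Hyper-V\New-VM ha-792400-m03 -Path 'C:\minikube\machines\ha-792400-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
		`Hyper-V\Set-VMMemory -VMName ha-792400-m03 -DynamicMemoryEnabled $false`,
		`Hyper-V\Set-VMProcessor ha-792400-m03 -Count 2`,
		`Hyper-V\Start-VM ha-792400-m03`,
	}
	for _, s := range steps {
		out, err := ps(s)
		if err != nil {
			panic(fmt.Errorf("%s: %w (%s)", s, err, out))
		}
		fmt.Println(out)
	}
}
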
	I0307 23:19:10.892500    6816 main.go:141] libmachine: Waiting for host to start...
	I0307 23:19:10.892500    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:13.060020    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:13.060020    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:13.061007    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:15.442274    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:15.442274    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:16.456925    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:18.543788    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:18.544171    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:18.544243    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:20.916663    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:20.916663    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:21.928270    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:24.023630    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:24.023630    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:24.023852    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:26.396845    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:26.396845    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:27.405150    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:29.508296    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:29.508296    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:29.509031    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:31.880174    6816 main.go:141] libmachine: [stdout =====>] : 
	I0307 23:19:31.880527    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:32.889719    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:35.016152    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:35.016401    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:35.016401    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:37.423741    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:19:37.423741    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:37.424261    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:39.423918    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:39.424153    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:39.424153    6816 machine.go:94] provisionDockerMachine start ...
	I0307 23:19:39.424278    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:41.461862    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:41.461862    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:41.461862    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:43.894490    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:19:43.894490    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:43.899764    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:19:43.899906    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:19:43.899906    6816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 23:19:44.016925    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0307 23:19:44.016925    6816 buildroot.go:166] provisioning hostname "ha-792400-m03"
	I0307 23:19:44.016925    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:46.030349    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:46.030349    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:46.031073    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:48.458139    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:19:48.458139    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:48.463374    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:19:48.463872    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:19:48.463872    6816 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-792400-m03 && echo "ha-792400-m03" | sudo tee /etc/hostname
	I0307 23:19:48.610866    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-792400-m03
	
	I0307 23:19:48.610980    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:50.643348    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:50.644204    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:50.644265    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:53.045406    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:19:53.045554    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:53.050577    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:19:53.050745    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:19:53.050745    6816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-792400-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-792400-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-792400-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 23:19:53.182421    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 23:19:53.182421    6816 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0307 23:19:53.182421    6816 buildroot.go:174] setting up certificates
	I0307 23:19:53.182421    6816 provision.go:84] configureAuth start
	I0307 23:19:53.182421    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:55.202100    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:55.202100    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:55.202351    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:19:57.592949    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:19:57.592949    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:57.592949    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:19:59.629670    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:19:59.629670    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:19:59.629670    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:02.046894    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:02.046894    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:02.046894    6816 provision.go:143] copyHostCerts
	I0307 23:20:02.046894    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0307 23:20:02.046894    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0307 23:20:02.046894    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0307 23:20:02.047548    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0307 23:20:02.049370    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0307 23:20:02.049487    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0307 23:20:02.049487    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0307 23:20:02.049487    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0307 23:20:02.050730    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0307 23:20:02.051012    6816 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0307 23:20:02.051040    6816 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0307 23:20:02.051385    6816 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0307 23:20:02.051904    6816 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-792400-m03 san=[127.0.0.1 172.20.59.36 ha-792400-m03 localhost minikube]
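
The server cert above is generated with the SANs listed in that log line. The sketch below builds a certificate with those SANs using crypto/x509; it is self-signed to stay short, whereas the real flow signs the server cert with the profile's ca.pem/ca-key.pem.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-792400-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config blob
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision.go:117 log line above.
		DNSNames:    []string{"ha-792400-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.59.36")},
	}
	// Self-signed for brevity: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
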
	I0307 23:20:02.191375    6816 provision.go:177] copyRemoteCerts
	I0307 23:20:02.203349    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 23:20:02.203349    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:04.234290    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:04.234290    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:04.234290    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:06.623182    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:06.623636    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:06.623636    6816 sshutil.go:53] new ssh client: &{IP:172.20.59.36 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\id_rsa Username:docker}
	I0307 23:20:06.732081    6816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.528629s)
	I0307 23:20:06.732081    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0307 23:20:06.732081    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 23:20:06.778770    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0307 23:20:06.778839    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0307 23:20:06.823400    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0307 23:20:06.823739    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 23:20:06.869051    6816 provision.go:87] duration metric: took 13.6864993s to configureAuth
	I0307 23:20:06.869123    6816 buildroot.go:189] setting minikube options for container-runtime
	I0307 23:20:06.869727    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:20:06.869823    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:08.867202    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:08.867202    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:08.867526    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:11.258599    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:11.259041    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:11.264316    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:20:11.264316    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:20:11.264316    6816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0307 23:20:11.386983    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0307 23:20:11.386983    6816 buildroot.go:70] root file system type: tmpfs
	I0307 23:20:11.387899    6816 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0307 23:20:11.387899    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:13.393436    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:13.393436    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:13.393436    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:15.798991    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:15.798991    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:15.804603    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:20:15.804603    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:20:15.804603    6816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.58.169"
	Environment="NO_PROXY=172.20.58.169,172.20.50.199"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0307 23:20:15.943601    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.58.169
	Environment=NO_PROXY=172.20.58.169,172.20.50.199
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0307 23:20:15.943712    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:18.001151    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:18.001762    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:18.001762    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:20.445742    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:20.445878    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:20.450689    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:20:20.451437    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:20:20.451437    6816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0307 23:20:21.619025    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0307 23:20:21.619025    6816 machine.go:97] duration metric: took 42.1944716s to provisionDockerMachine
	I0307 23:20:21.619025    6816 client.go:171] duration metric: took 1m48.7569484s to LocalClient.Create
	I0307 23:20:21.619025    6816 start.go:167] duration metric: took 1m48.7569484s to libmachine.API.Create "ha-792400"
	I0307 23:20:21.619025    6816 start.go:293] postStartSetup for "ha-792400-m03" (driver="hyperv")
	I0307 23:20:21.619025    6816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 23:20:21.630707    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 23:20:21.630707    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:23.629603    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:23.629603    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:23.629603    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:26.030278    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:26.030278    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:26.030278    6816 sshutil.go:53] new ssh client: &{IP:172.20.59.36 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\id_rsa Username:docker}
	I0307 23:20:26.136844    6816 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5060952s)
	I0307 23:20:26.147621    6816 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 23:20:26.154917    6816 info.go:137] Remote host: Buildroot 2023.02.9
	I0307 23:20:26.154962    6816 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0307 23:20:26.155138    6816 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0307 23:20:26.155988    6816 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0307 23:20:26.155988    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0307 23:20:26.167573    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 23:20:26.186576    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0307 23:20:26.231978    6816 start.go:296] duration metric: took 4.6129093s for postStartSetup
	I0307 23:20:26.234775    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:28.229897    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:28.229897    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:28.230816    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:30.614500    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:30.614500    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:30.614990    6816 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\config.json ...
	I0307 23:20:30.618284    6816 start.go:128] duration metric: took 1m57.761136s to createHost
	I0307 23:20:30.618445    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:32.606765    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:32.606884    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:32.606884    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:35.014723    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:35.014876    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:35.020837    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:20:35.020837    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:20:35.021382    6816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0307 23:20:35.146460    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709853635.156036719
	
	I0307 23:20:35.146558    6816 fix.go:216] guest clock: 1709853635.156036719
	I0307 23:20:35.146558    6816 fix.go:229] Guest: 2024-03-07 23:20:35.156036719 +0000 UTC Remote: 2024-03-07 23:20:30.618348 +0000 UTC m=+532.325941501 (delta=4.537688719s)
	I0307 23:20:35.146642    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:37.145169    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:37.145169    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:37.145169    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:39.544643    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:39.544643    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:39.550178    6816 main.go:141] libmachine: Using SSH client type: native
	I0307 23:20:39.550852    6816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.36 22 <nil> <nil>}
	I0307 23:20:39.550852    6816 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709853635
	I0307 23:20:39.688579    6816 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar  7 23:20:35 UTC 2024
	
	I0307 23:20:39.688579    6816 fix.go:236] clock set: Thu Mar  7 23:20:35 UTC 2024
	 (err=<nil>)
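
The fix.go lines above read the guest clock over SSH (the "%!s(MISSING).%!N(MISSING)" appears to be the logger eating the format verbs of "date +%s.%N"), compute the host/guest delta, and reset the guest with "sudo date -s @<epoch>". A rough sketch of that check, with runSSH standing in for the SSH runner and an arbitrary 2s threshold; illustrative only, not minikube's exact policy:

    // Sketch only: one way to implement the guest-clock check logged above.
    // runSSH stands in for minikube's ssh runner; the 2s threshold and the
    // direction of the reset are illustrative assumptions.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func syncGuestClock(runSSH func(string) (string, error), maxSkew time.Duration) error {
        out, err := runSSH("date +%s.%N") // guest epoch, e.g. "1709853635.156036719"
        if err != nil {
            return err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return err
        }
        skew := time.Since(time.Unix(0, int64(secs*float64(time.Second))))
        if skew < 0 {
            skew = -skew
        }
        if skew <= maxSkew {
            return nil // close enough, leave the guest clock alone
        }
        // Re-seed the guest clock from the host's current time (one possible policy).
        _, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
        return err
    }

    func main() {
        // Fake runner so the sketch runs anywhere: pretend the guest is 5s behind.
        fake := func(cmd string) (string, error) {
            if strings.HasPrefix(cmd, "date +") {
                return fmt.Sprintf("%.9f", float64(time.Now().Add(-5*time.Second).UnixNano())/1e9), nil
            }
            fmt.Println("would run:", cmd)
            return "", nil
        }
        fmt.Println(syncGuestClock(fake, 2*time.Second))
    }
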
	I0307 23:20:39.688579    6816 start.go:83] releasing machines lock for "ha-792400-m03", held for 2m6.8322641s
	I0307 23:20:39.688579    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:41.689437    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:41.689437    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:41.689437    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:44.075943    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:44.075943    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:44.078644    6816 out.go:177] * Found network options:
	I0307 23:20:44.082240    6816 out.go:177]   - NO_PROXY=172.20.58.169,172.20.50.199
	W0307 23:20:44.086486    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 23:20:44.086486    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 23:20:44.088999    6816 out.go:177]   - NO_PROXY=172.20.58.169,172.20.50.199
	W0307 23:20:44.091327    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 23:20:44.091327    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 23:20:44.092871    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	W0307 23:20:44.092871    6816 proxy.go:119] fail to check proxy env: Error ip not in block
	I0307 23:20:44.095206    6816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 23:20:44.095206    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:44.107175    6816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 23:20:44.107175    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:20:46.161154    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:46.161154    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:46.161154    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:46.172867    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:46.172867    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:46.172867    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:48.719280    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:48.719348    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:48.719348    6816 sshutil.go:53] new ssh client: &{IP:172.20.59.36 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\id_rsa Username:docker}
	I0307 23:20:48.728937    6816 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:20:48.728937    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:48.728937    6816 sshutil.go:53] new ssh client: &{IP:172.20.59.36 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\id_rsa Username:docker}
	I0307 23:20:48.807797    6816 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.7005777s)
	W0307 23:20:48.807797    6816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0307 23:20:48.818530    6816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 23:20:48.873904    6816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0307 23:20:48.873904    6816 start.go:494] detecting cgroup driver to use...
	I0307 23:20:48.873904    6816 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7786528s)
	I0307 23:20:48.874581    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 23:20:48.920852    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 23:20:48.951906    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 23:20:48.969858    6816 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 23:20:48.979894    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 23:20:49.006851    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 23:20:49.039295    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 23:20:49.067931    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 23:20:49.097279    6816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 23:20:49.131940    6816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 23:20:49.162577    6816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 23:20:49.189156    6816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 23:20:49.217674    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:20:49.409887    6816 ssh_runner.go:195] Run: sudo systemctl restart containerd
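
The block above rewrites /etc/containerd/config.toml over SSH so containerd uses the cgroupfs driver and the expected pause image, then reloads and restarts it. A condensed Go sketch of that command sequence; paths and values are taken from the log, only a subset of the steps is shown, and the commands are printed here rather than executed:

    // Condensed sketch of the containerd reconfiguration sequence above as a
    // plain list of shell commands.
    package main

    import "fmt"

    func main() {
        steps := []string{
            `sudo mkdir -p /etc && printf "runtime-endpoint: unix:///run/containerd/containerd.sock\n" | sudo tee /etc/crictl.yaml`,
            `sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
            `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
            `sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
            `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
            `sudo systemctl daemon-reload && sudo systemctl restart containerd`,
        }
        for i, s := range steps {
            // In minikube these go through ssh_runner; here we only print them.
            fmt.Printf("step %d: %s\n", i+1, s)
        }
    }
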
	I0307 23:20:49.440919    6816 start.go:494] detecting cgroup driver to use...
	I0307 23:20:49.453189    6816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0307 23:20:49.493154    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 23:20:49.525753    6816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0307 23:20:49.571555    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0307 23:20:49.605106    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 23:20:49.640102    6816 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 23:20:49.705340    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 23:20:49.727183    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 23:20:49.771613    6816 ssh_runner.go:195] Run: which cri-dockerd
	I0307 23:20:49.789015    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0307 23:20:49.808561    6816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0307 23:20:49.849146    6816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0307 23:20:50.038104    6816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0307 23:20:50.210044    6816 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0307 23:20:50.210044    6816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0307 23:20:50.254946    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:20:50.446876    6816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0307 23:20:51.983412    6816 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5364127s)
	I0307 23:20:51.995408    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0307 23:20:52.029079    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 23:20:52.062863    6816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0307 23:20:52.257988    6816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0307 23:20:52.450714    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:20:52.643497    6816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0307 23:20:52.683067    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0307 23:20:52.716509    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:20:52.901323    6816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
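
The docker.go line above notes that docker is switched to the cgroupfs cgroup driver via a 130-byte /etc/docker/daemon.json pushed from memory. The exact file contents are not in the log; the sketch below builds a minimal daemon.json that only pins the driver (assumption: the file minikube really pushes carries additional keys):

    // Sketch: a minimal /etc/docker/daemon.json pinning the cgroup driver to
    // "cgroupfs", matching the docker.go message above.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        b, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b))
        // The bytes are then copied into the guest ("scp memory --> /etc/docker/daemon.json"),
        // followed by systemctl daemon-reload and systemctl restart docker, as logged above.
    }
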
	I0307 23:20:52.998724    6816 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0307 23:20:53.010772    6816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0307 23:20:53.019238    6816 start.go:562] Will wait 60s for crictl version
	I0307 23:20:53.029900    6816 ssh_runner.go:195] Run: which crictl
	I0307 23:20:53.047805    6816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 23:20:53.116723    6816 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0307 23:20:53.128600    6816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 23:20:53.177042    6816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0307 23:20:53.209905    6816 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0307 23:20:53.212815    6816 out.go:177]   - env NO_PROXY=172.20.58.169
	I0307 23:20:53.215363    6816 out.go:177]   - env NO_PROXY=172.20.58.169,172.20.50.199
	I0307 23:20:53.217125    6816 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0307 23:20:53.221876    6816 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0307 23:20:53.221904    6816 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0307 23:20:53.221904    6816 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0307 23:20:53.221964    6816 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0307 23:20:53.224739    6816 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0307 23:20:53.224739    6816 ip.go:210] interface addr: 172.20.48.1/20
	I0307 23:20:53.236196    6816 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0307 23:20:53.241292    6816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
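
The one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the gateway address found on the "vEthernet (Default Switch)" interface. The same replace-or-append logic, done natively in Go purely for illustration:

    // Sketch of the /etc/hosts edit shown above: remove any existing
    // host.minikube.internal entry, then append the discovered gateway IP.
    package main

    import (
        "fmt"
        "strings"
    )

    func upsertHost(hosts, ip, name string) string {
        var keep []string
        for _, line := range strings.Split(hosts, "\n") {
            if line != "" && !strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        return strings.Join(keep, "\n") + "\n"
    }

    func main() {
        in := "127.0.0.1\tlocalhost\n172.20.48.2\thost.minikube.internal\n"
        fmt.Print(upsertHost(in, "172.20.48.1", "host.minikube.internal"))
    }
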
	I0307 23:20:53.262854    6816 mustload.go:65] Loading cluster: ha-792400
	I0307 23:20:53.263524    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:20:53.264239    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:20:55.263553    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:55.263553    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:55.263553    6816 host.go:66] Checking if "ha-792400" exists ...
	I0307 23:20:55.264334    6816 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400 for IP: 172.20.59.36
	I0307 23:20:55.264334    6816 certs.go:194] generating shared ca certs ...
	I0307 23:20:55.264334    6816 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:20:55.264899    6816 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0307 23:20:55.265581    6816 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0307 23:20:55.265738    6816 certs.go:256] generating profile certs ...
	I0307 23:20:55.266378    6816 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\client.key
	I0307 23:20:55.266650    6816 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6e7a70c4
	I0307 23:20:55.266755    6816 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6e7a70c4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.58.169 172.20.50.199 172.20.59.36 172.20.63.254]
	I0307 23:20:55.424258    6816 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6e7a70c4 ...
	I0307 23:20:55.424258    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6e7a70c4: {Name:mk2d7123acb961ebc703db74541faae0d436c001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:20:55.426195    6816 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6e7a70c4 ...
	I0307 23:20:55.426195    6816 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6e7a70c4: {Name:mkdaf51f147289c85301dcf4dc53946c27cee5d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 23:20:55.426195    6816 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt.6e7a70c4 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt
	I0307 23:20:55.439337    6816 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key.6e7a70c4 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key
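
The "generating signed profile cert" lines above create an apiserver certificate whose IP SANs cover the service IP, the loopback, all three control-plane node IPs and the kube-vip VIP 172.20.63.254. A self-signed Go sketch showing just that SAN list (minikube signs with its minikubeCA key instead of self-signing, and uses different key material):

    // Sketch: an apiserver serving cert whose IP SANs match the list in the
    // crypto.go line above. Self-signed here for brevity.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("172.20.58.169"), net.ParseIP("172.20.50.199"),
                net.ParseIP("172.20.59.36"), net.ParseIP("172.20.63.254"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver cert: %d DER bytes, %d IP SANs\n", len(der), len(tmpl.IPAddresses))
    }
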
	I0307 23:20:55.441610    6816 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key
	I0307 23:20:55.442174    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0307 23:20:55.442483    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0307 23:20:55.442483    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0307 23:20:55.442483    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0307 23:20:55.443011    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0307 23:20:55.443140    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0307 23:20:55.443140    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0307 23:20:55.443902    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0307 23:20:55.443902    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0307 23:20:55.444618    6816 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0307 23:20:55.444618    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0307 23:20:55.444618    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0307 23:20:55.445359    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0307 23:20:55.445359    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0307 23:20:55.445952    6816 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0307 23:20:55.445952    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:20:55.445952    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0307 23:20:55.445952    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0307 23:20:55.446663    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:20:57.440346    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:20:57.440647    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:57.440743    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:20:59.871074    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:20:59.871145    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:20:59.871145    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:20:59.969836    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0307 23:20:59.977471    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0307 23:21:00.010894    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0307 23:21:00.018597    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0307 23:21:00.048674    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0307 23:21:00.056675    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0307 23:21:00.086664    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0307 23:21:00.093706    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0307 23:21:00.125902    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0307 23:21:00.131441    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0307 23:21:00.158033    6816 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0307 23:21:00.164733    6816 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0307 23:21:00.183114    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 23:21:00.230565    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 23:21:00.272600    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 23:21:00.314739    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0307 23:21:00.359073    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0307 23:21:00.401038    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0307 23:21:00.442713    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 23:21:00.485162    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ha-792400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 23:21:00.528914    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 23:21:00.569153    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0307 23:21:00.615285    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0307 23:21:00.659190    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0307 23:21:00.689317    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0307 23:21:00.721226    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0307 23:21:00.753397    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0307 23:21:00.784124    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0307 23:21:00.815067    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0307 23:21:00.845478    6816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0307 23:21:00.883965    6816 ssh_runner.go:195] Run: openssl version
	I0307 23:21:00.904106    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 23:21:00.933299    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:21:00.939760    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:21:00.952207    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 23:21:00.972055    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 23:21:01.001634    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0307 23:21:01.030725    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0307 23:21:01.039489    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0307 23:21:01.051492    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0307 23:21:01.071519    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0307 23:21:01.100256    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0307 23:21:01.132018    6816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0307 23:21:01.138629    6816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0307 23:21:01.150998    6816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0307 23:21:01.170130    6816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 23:21:01.199638    6816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 23:21:01.205585    6816 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0307 23:21:01.205585    6816 kubeadm.go:928] updating node {m03 172.20.59.36 8443 v1.28.4 docker true true} ...
	I0307 23:21:01.205585    6816 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-792400-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.59.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 23:21:01.206117    6816 kube-vip.go:101] generating kube-vip config ...
	I0307 23:21:01.206264    6816 kube-vip.go:121] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.63.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
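
Only two values in the kube-vip manifest above vary per cluster: the VIP (172.20.63.254) and the interface (eth0). A small text/template sketch of rendering just those fields; the fragment is abbreviated, and the full manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml so kubelet runs it as a static pod:

    // Sketch: render the per-cluster fields of a kube-vip static pod manifest.
    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        frag := "    - name: vip_interface\n      value: {{ .Interface }}\n    - name: address\n      value: {{ .VIP }}\n"
        t := template.Must(template.New("kube-vip").Parse(frag))
        if err := t.Execute(os.Stdout, struct{ Interface, VIP string }{"eth0", "172.20.63.254"}); err != nil {
            panic(err)
        }
    }
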
	I0307 23:21:01.216793    6816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 23:21:01.233719    6816 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0307 23:21:01.244916    6816 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0307 23:21:01.262567    6816 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0307 23:21:01.262699    6816 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0307 23:21:01.262787    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0307 23:21:01.262623    6816 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0307 23:21:01.263080    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0307 23:21:01.275943    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 23:21:01.277467    6816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0307 23:21:01.278069    6816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0307 23:21:01.297786    6816 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0307 23:21:01.297786    6816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0307 23:21:01.297786    6816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0307 23:21:01.297786    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0307 23:21:01.297786    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0307 23:21:01.308806    6816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0307 23:21:01.354304    6816 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0307 23:21:01.354557    6816 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
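
The binary.go lines above show kubeadm, kubelet and kubectl being fetched from dl.k8s.io with a sidecar .sha256 checksum rather than taken from the local cache. A small sketch that reconstructs those URLs (URL construction only; the download, checksum verification and the scp into /var/lib/minikube/binaries are omitted):

    // Sketch: build the dl.k8s.io release URLs seen in binary.go above.
    package main

    import "fmt"

    func releaseURL(version, arch, name string) string {
        base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/linux/%s/%s", version, arch, name)
        return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
    }

    func main() {
        for _, bin := range []string{"kubeadm", "kubelet", "kubectl"} {
            fmt.Println(releaseURL("v1.28.4", "amd64", bin))
        }
    }
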
	I0307 23:21:02.633277    6816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0307 23:21:02.650757    6816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0307 23:21:02.681564    6816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 23:21:02.715680    6816 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1262 bytes)
	I0307 23:21:02.761892    6816 ssh_runner.go:195] Run: grep 172.20.63.254	control-plane.minikube.internal$ /etc/hosts
	I0307 23:21:02.767722    6816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.63.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 23:21:02.799183    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:21:03.003240    6816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 23:21:03.034625    6816 host.go:66] Checking if "ha-792400" exists ...
	I0307 23:21:03.035310    6816 start.go:316] joinCluster: &{Name:ha-792400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-792400 Namespace:default APIServerHAVIP:172.20.63.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.58.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.50.199 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.20.59.36 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 23:21:03.035310    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0307 23:21:03.035310    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:21:05.067060    6816 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:21:05.067060    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:21:05.067060    6816 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:21:07.477710    6816 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:21:07.477710    6816 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:21:07.477710    6816 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:21:07.668386    6816 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.6329003s)
	I0307 23:21:07.668386    6816 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.20.59.36 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:21:07.668386    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cec46d.ea12q4hw7balg83q --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-792400-m03 --control-plane --apiserver-advertise-address=172.20.59.36 --apiserver-bind-port=8443"
	I0307 23:21:50.250862    6816 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cec46d.ea12q4hw7balg83q --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-792400-m03 --control-plane --apiserver-advertise-address=172.20.59.36 --apiserver-bind-port=8443": (42.5820801s)
	I0307 23:21:50.250992    6816 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0307 23:21:51.048382    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-792400-m03 minikube.k8s.io/updated_at=2024_03_07T23_21_51_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=ha-792400 minikube.k8s.io/primary=false
	I0307 23:21:51.232131    6816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-792400-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0307 23:21:51.391221    6816 start.go:318] duration metric: took 48.3554605s to joinCluster
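
The join above is two steps: ask the existing control plane for a join command ("kubeadm token create --print-join-command --ttl=0"), then run it on the new node with the extra flags minikube appends (--control-plane, the cri-dockerd socket, node name and advertise address). A sketch that only assembles that command string; token and CA hash are placeholders filled in from the first step at run time:

    // Sketch: compose the kubeadm join command with the flags seen in the log.
    package main

    import (
        "fmt"
        "strings"
    )

    func buildJoinCmd(endpoint, token, caHash, nodeName, advertiseIP string) string {
        return strings.Join([]string{
            "kubeadm join", endpoint,
            "--token", token,
            "--discovery-token-ca-cert-hash", caHash,
            "--ignore-preflight-errors=all",
            "--cri-socket unix:///var/run/cri-dockerd.sock",
            "--node-name=" + nodeName,
            "--control-plane",
            "--apiserver-advertise-address=" + advertiseIP,
            "--apiserver-bind-port=8443",
        }, " ")
    }

    func main() {
        fmt.Println(buildJoinCmd("control-plane.minikube.internal:8443",
            "<token>", "sha256:<hash>", "ha-792400-m03", "172.20.59.36"))
    }
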
	I0307 23:21:51.391221    6816 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.20.59.36 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0307 23:21:51.396217    6816 out.go:177] * Verifying Kubernetes components...
	I0307 23:21:51.392246    6816 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:21:51.410233    6816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 23:21:51.767460    6816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 23:21:51.803438    6816 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:21:51.804240    6816 kapi.go:59] client config for ha-792400: &rest.Config{Host:"https://172.20.63.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-792400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\ha-792400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0307 23:21:51.804330    6816 kubeadm.go:477] Overriding stale ClientConfig host https://172.20.63.254:8443 with https://172.20.58.169:8443
	I0307 23:21:51.805254    6816 node_ready.go:35] waiting up to 6m0s for node "ha-792400-m03" to be "Ready" ...
	I0307 23:21:51.805459    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:51.805509    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:51.805509    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:51.805543    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:51.821751    6816 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0307 23:21:52.309212    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:52.309395    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:52.309395    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:52.309395    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:52.314876    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:21:52.818244    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:52.818244    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:52.818244    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:52.818244    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:52.823900    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:21:53.311564    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:53.311791    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:53.311791    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:53.311862    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:53.316772    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:53.815711    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:53.815711    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:53.815711    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:53.815711    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:53.820720    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:21:53.822018    6816 node_ready.go:53] node "ha-792400-m03" has status "Ready":"False"
	I0307 23:21:54.305839    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:54.305839    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:54.305839    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:54.305839    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:54.310619    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:54.812466    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:54.812672    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:54.812732    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:54.812732    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:54.817487    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:55.317067    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:55.317152    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:55.317152    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:55.317207    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:55.321571    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:55.807758    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:55.807758    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:55.807758    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:55.807758    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:55.812339    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:56.312273    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:56.312273    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:56.312273    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:56.312273    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:56.320401    6816 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0307 23:21:56.322124    6816 node_ready.go:53] node "ha-792400-m03" has status "Ready":"False"
	I0307 23:21:56.818808    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:56.818808    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:56.818808    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:56.818808    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:56.823213    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:57.308028    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:57.308271    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:57.308271    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:57.308271    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:57.808452    6816 round_trippers.go:574] Response Status: 200 OK in 500 milliseconds
	I0307 23:21:57.810012    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:57.810012    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:57.810012    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:57.810012    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:57.816294    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:21:58.310632    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:58.310632    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:58.310632    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:58.310632    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:58.317366    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:21:58.820746    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:58.820746    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:58.820746    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:58.820746    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:58.825490    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:58.826836    6816 node_ready.go:53] node "ha-792400-m03" has status "Ready":"False"
	I0307 23:21:59.306958    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:59.306958    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:59.306958    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:59.306958    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:59.312533    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:21:59.807924    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:21:59.808115    6816 round_trippers.go:469] Request Headers:
	I0307 23:21:59.808115    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:21:59.808115    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:21:59.815234    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:22:00.308331    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:00.308420    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:00.308420    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:00.308420    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:00.313116    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:00.810423    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:00.810631    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:00.810631    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:00.810631    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:00.815913    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:01.308477    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:01.308698    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:01.308698    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:01.308698    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:01.312832    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:01.314210    6816 node_ready.go:53] node "ha-792400-m03" has status "Ready":"False"
	I0307 23:22:01.810640    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:01.810640    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:01.810640    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:01.810640    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:01.815281    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:02.312629    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:02.312758    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:02.312758    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:02.312758    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:02.317145    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:02.817754    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:02.817754    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:02.817754    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:02.817849    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:02.823377    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:03.306871    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:03.306871    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.306871    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.306871    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.310514    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:22:03.311789    6816 node_ready.go:49] node "ha-792400-m03" has status "Ready":"True"
	I0307 23:22:03.311877    6816 node_ready.go:38] duration metric: took 11.5065144s for node "ha-792400-m03" to be "Ready" ...
	I0307 23:22:03.311877    6816 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 23:22:03.312054    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:22:03.312054    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.312054    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.312054    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.322640    6816 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0307 23:22:03.332515    6816 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-28rtr" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.332610    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-28rtr
	I0307 23:22:03.332672    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.332672    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.332672    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.337028    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:03.338111    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:03.338198    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.338198    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.338198    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.342311    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:03.342646    6816 pod_ready.go:92] pod "coredns-5dd5756b68-28rtr" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:03.342646    6816 pod_ready.go:81] duration metric: took 10.1305ms for pod "coredns-5dd5756b68-28rtr" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.342646    6816 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rx9dg" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.342646    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rx9dg
	I0307 23:22:03.342646    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.342646    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.343370    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.346417    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:22:03.348619    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:03.348619    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.348619    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.348619    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.352195    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:22:03.353287    6816 pod_ready.go:92] pod "coredns-5dd5756b68-rx9dg" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:03.353287    6816 pod_ready.go:81] duration metric: took 10.641ms for pod "coredns-5dd5756b68-rx9dg" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.353379    6816 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.353425    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400
	I0307 23:22:03.353425    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.353425    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.353425    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.356971    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:22:03.358335    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:03.358389    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.358389    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.358389    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.362012    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:22:03.362974    6816 pod_ready.go:92] pod "etcd-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:03.362974    6816 pod_ready.go:81] duration metric: took 9.5943ms for pod "etcd-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.362974    6816 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.362974    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m02
	I0307 23:22:03.362974    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.362974    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.362974    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.367184    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:03.368167    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:03.368167    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.368167    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.368167    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.372185    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:03.373547    6816 pod_ready.go:92] pod "etcd-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:03.373547    6816 pod_ready.go:81] duration metric: took 10.5739ms for pod "etcd-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.373600    6816 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:03.507402    6816 request.go:629] Waited for 133.8014ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m03
	I0307 23:22:03.507655    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m03
	I0307 23:22:03.507655    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.507655    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.507655    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.511236    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:22:03.710280    6816 request.go:629] Waited for 197.2041ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:03.710280    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:03.710280    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.710280    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.710280    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.714900    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:03.913869    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m03
	I0307 23:22:03.913869    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:03.913869    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:03.913869    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:03.918257    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:04.120879    6816 request.go:629] Waited for 201.188ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:04.121267    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:04.121314    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:04.121314    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:04.121314    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:04.126967    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:04.387722    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m03
	I0307 23:22:04.387914    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:04.387914    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:04.387914    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:04.397154    6816 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0307 23:22:04.514318    6816 request.go:629] Waited for 115.9844ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:04.514423    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:04.514475    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:04.514475    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:04.514475    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:04.518867    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:04.889193    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m03
	I0307 23:22:04.889193    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:04.889193    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:04.889193    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:04.893588    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:04.920302    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:04.920482    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:04.920482    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:04.920482    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:04.925075    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:05.377905    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/etcd-ha-792400-m03
	I0307 23:22:05.377905    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:05.377905    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:05.377982    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:05.395380    6816 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0307 23:22:05.396045    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:05.396045    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:05.396045    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:05.396045    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:05.411649    6816 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0307 23:22:05.412613    6816 pod_ready.go:92] pod "etcd-ha-792400-m03" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:05.412687    6816 pod_ready.go:81] duration metric: took 2.0390687s for pod "etcd-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:05.412687    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:05.519271    6816 request.go:629] Waited for 106.305ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400
	I0307 23:22:05.519393    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400
	I0307 23:22:05.519393    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:05.519393    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:05.519393    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:05.526614    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:22:05.707794    6816 request.go:629] Waited for 180.1528ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:05.707910    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:05.707910    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:05.708057    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:05.708057    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:05.714767    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:22:05.715418    6816 pod_ready.go:92] pod "kube-apiserver-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:05.715418    6816 pod_ready.go:81] duration metric: took 302.7279ms for pod "kube-apiserver-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:05.715418    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:05.910848    6816 request.go:629] Waited for 195.2327ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400-m02
	I0307 23:22:05.910923    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400-m02
	I0307 23:22:05.910923    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:05.910923    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:05.911000    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:05.915376    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:06.114415    6816 request.go:629] Waited for 197.3001ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:06.114631    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:06.114631    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:06.114631    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:06.114631    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:06.119331    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:06.120571    6816 pod_ready.go:92] pod "kube-apiserver-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:06.120680    6816 pod_ready.go:81] duration metric: took 405.2583ms for pod "kube-apiserver-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:06.120680    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:06.316534    6816 request.go:629] Waited for 195.7646ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400-m03
	I0307 23:22:06.316677    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-792400-m03
	I0307 23:22:06.316737    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:06.316765    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:06.316765    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:06.321514    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:06.518902    6816 request.go:629] Waited for 195.6939ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:06.518978    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:06.518978    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:06.518978    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:06.518978    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:06.523574    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:06.524962    6816 pod_ready.go:92] pod "kube-apiserver-ha-792400-m03" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:06.524962    6816 pod_ready.go:81] duration metric: took 404.2776ms for pod "kube-apiserver-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:06.525142    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:06.719992    6816 request.go:629] Waited for 194.728ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400
	I0307 23:22:06.720391    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400
	I0307 23:22:06.720391    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:06.720391    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:06.720463    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:06.725696    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:06.909199    6816 request.go:629] Waited for 181.7967ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:06.909469    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:06.909469    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:06.909469    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:06.909469    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:06.914267    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:06.915515    6816 pod_ready.go:92] pod "kube-controller-manager-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:06.915582    6816 pod_ready.go:81] duration metric: took 390.4362ms for pod "kube-controller-manager-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:06.915582    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:07.112189    6816 request.go:629] Waited for 196.2786ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400-m02
	I0307 23:22:07.112442    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400-m02
	I0307 23:22:07.112484    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:07.112484    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:07.112484    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:07.118275    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:07.316241    6816 request.go:629] Waited for 196.6902ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:07.316408    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:07.316474    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:07.316474    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:07.316474    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:07.322039    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:07.322693    6816 pod_ready.go:92] pod "kube-controller-manager-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:07.322693    6816 pod_ready.go:81] duration metric: took 407.1074ms for pod "kube-controller-manager-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:07.322693    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:07.516496    6816 request.go:629] Waited for 193.8008ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400-m03
	I0307 23:22:07.516587    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-792400-m03
	I0307 23:22:07.516587    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:07.516587    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:07.516587    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:07.524279    6816 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0307 23:22:07.719887    6816 request.go:629] Waited for 194.2236ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:07.720094    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:07.720094    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:07.720196    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:07.720196    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:07.726816    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:22:07.727745    6816 pod_ready.go:92] pod "kube-controller-manager-ha-792400-m03" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:07.727830    6816 pod_ready.go:81] duration metric: took 405.1325ms for pod "kube-controller-manager-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:07.727867    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2rxpp" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:07.909979    6816 request.go:629] Waited for 182.1101ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2rxpp
	I0307 23:22:07.909979    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2rxpp
	I0307 23:22:07.909979    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:07.909979    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:07.909979    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:07.918860    6816 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0307 23:22:08.116254    6816 request.go:629] Waited for 195.4808ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:08.116436    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:08.116535    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:08.116535    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:08.116535    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:08.121668    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:08.122464    6816 pod_ready.go:92] pod "kube-proxy-2rxpp" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:08.122531    6816 pod_ready.go:81] duration metric: took 394.6603ms for pod "kube-proxy-2rxpp" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:08.122531    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6wd5" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:08.317811    6816 request.go:629] Waited for 195.0091ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6wd5
	I0307 23:22:08.318045    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6wd5
	I0307 23:22:08.318122    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:08.318174    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:08.318194    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:08.323124    6816 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0307 23:22:08.522082    6816 request.go:629] Waited for 198.5341ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:08.522082    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:08.522082    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:08.522082    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:08.522082    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:08.526487    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:08.528075    6816 pod_ready.go:92] pod "kube-proxy-j6wd5" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:08.528075    6816 pod_ready.go:81] duration metric: took 405.54ms for pod "kube-proxy-j6wd5" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:08.528075    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zxmcc" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:08.707970    6816 request.go:629] Waited for 179.8935ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxmcc
	I0307 23:22:08.708379    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxmcc
	I0307 23:22:08.708379    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:08.708379    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:08.708379    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:08.712682    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:08.911145    6816 request.go:629] Waited for 196.7134ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:08.911304    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:08.911304    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:08.911304    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:08.911304    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:08.916642    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:08.917489    6816 pod_ready.go:92] pod "kube-proxy-zxmcc" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:08.917489    6816 pod_ready.go:81] duration metric: took 389.4108ms for pod "kube-proxy-zxmcc" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:08.917489    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:09.115217    6816 request.go:629] Waited for 197.7258ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400
	I0307 23:22:09.115772    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400
	I0307 23:22:09.115772    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:09.115772    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:09.115772    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:09.121155    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:09.318165    6816 request.go:629] Waited for 195.3691ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:09.318414    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400
	I0307 23:22:09.318414    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:09.318414    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:09.318414    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:09.323815    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:09.324896    6816 pod_ready.go:92] pod "kube-scheduler-ha-792400" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:09.324974    6816 pod_ready.go:81] duration metric: took 407.481ms for pod "kube-scheduler-ha-792400" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:09.324974    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:09.520557    6816 request.go:629] Waited for 195.3123ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400-m02
	I0307 23:22:09.520690    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400-m02
	I0307 23:22:09.520690    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:09.520690    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:09.520690    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:09.524864    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:09.710530    6816 request.go:629] Waited for 183.349ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:09.710626    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m02
	I0307 23:22:09.710626    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:09.710709    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:09.710709    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:09.715032    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:09.716348    6816 pod_ready.go:92] pod "kube-scheduler-ha-792400-m02" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:09.716468    6816 pod_ready.go:81] duration metric: took 391.4897ms for pod "kube-scheduler-ha-792400-m02" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:09.716468    6816 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:09.914528    6816 request.go:629] Waited for 197.6487ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400-m03
	I0307 23:22:09.914715    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-792400-m03
	I0307 23:22:09.914715    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:09.914715    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:09.914715    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:09.920047    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:10.119608    6816 request.go:629] Waited for 198.4471ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:10.119790    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes/ha-792400-m03
	I0307 23:22:10.119851    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:10.119917    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:10.119976    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:10.124489    6816 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0307 23:22:10.125544    6816 pod_ready.go:92] pod "kube-scheduler-ha-792400-m03" in "kube-system" namespace has status "Ready":"True"
	I0307 23:22:10.125647    6816 pod_ready.go:81] duration metric: took 409.1209ms for pod "kube-scheduler-ha-792400-m03" in "kube-system" namespace to be "Ready" ...
	I0307 23:22:10.125647    6816 pod_ready.go:38] duration metric: took 6.8136182s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 23:22:10.125647    6816 api_server.go:52] waiting for apiserver process to appear ...
	I0307 23:22:10.137842    6816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 23:22:10.165264    6816 api_server.go:72] duration metric: took 18.7737861s to wait for apiserver process to appear ...
	I0307 23:22:10.165264    6816 api_server.go:88] waiting for apiserver healthz status ...
	I0307 23:22:10.165264    6816 api_server.go:253] Checking apiserver healthz at https://172.20.58.169:8443/healthz ...
	I0307 23:22:10.172335    6816 api_server.go:279] https://172.20.58.169:8443/healthz returned 200:
	ok
	I0307 23:22:10.173021    6816 round_trippers.go:463] GET https://172.20.58.169:8443/version
	I0307 23:22:10.173021    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:10.173021    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:10.173021    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:10.174322    6816 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0307 23:22:10.174322    6816 api_server.go:141] control plane version: v1.28.4
	I0307 23:22:10.174322    6816 api_server.go:131] duration metric: took 9.0577ms to wait for apiserver health ...
	I0307 23:22:10.174322    6816 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 23:22:10.321628    6816 request.go:629] Waited for 147.3046ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:22:10.321628    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:22:10.321628    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:10.321628    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:10.321628    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:10.331942    6816 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0307 23:22:10.344822    6816 system_pods.go:59] 24 kube-system pods found
	I0307 23:22:10.344822    6816 system_pods.go:61] "coredns-5dd5756b68-28rtr" [8f70fcea-fb5e-4bfe-a184-a7487922459d] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "coredns-5dd5756b68-rx9dg" [09969ba6-29bd-449a-8df2-85d52c1cca8e] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "etcd-ha-792400" [6d4e209d-fc9c-4f71-a13f-b359b65ae7ad] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "etcd-ha-792400-m02" [ed952253-b72b-4443-9189-ad1dcfabc268] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "etcd-ha-792400-m03" [048f57d4-7047-45b1-b865-e5768ce81ebf] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kindnet-7bztm" [a0918f25-6cde-462e-8f12-58c424e25ffa] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kindnet-fvx87" [e26e6f69-a3e8-4b89-9ec0-21959683db17] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kindnet-nwgxl" [07d0d037-8522-4af4-9c41-d05bad3c2753] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-apiserver-ha-792400" [2356c8e9-8a52-4bf2-b8e6-24974e45f15c] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-apiserver-ha-792400-m02" [54d24fa6-cc12-47f7-89b8-07c35b710b9c] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-apiserver-ha-792400-m03" [f689ec77-3fff-48a7-bef0-6ca89dbae7fa] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-controller-manager-ha-792400" [57efa972-84b4-4614-b8e0-c6e3eeef55f7] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-controller-manager-ha-792400-m02" [3a897c1b-a6a9-4ecb-abb4-f350789cde8a] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-controller-manager-ha-792400-m03" [e58b980b-940b-4da9-868a-d5c6d7d8b8e3] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-proxy-2rxpp" [ea9a7d5a-b760-4056-ab38-cfa70276c427] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-proxy-j6wd5" [bc09092e-551d-448f-af38-f8412bdcfe3a] Running
	I0307 23:22:10.344822    6816 system_pods.go:61] "kube-proxy-zxmcc" [0a429b85-7b58-447e-963b-39976d48fba0] Running
	I0307 23:22:10.345362    6816 system_pods.go:61] "kube-scheduler-ha-792400" [24c51162-87f0-4232-bc6a-32aef6110baa] Running
	I0307 23:22:10.345362    6816 system_pods.go:61] "kube-scheduler-ha-792400-m02" [26d95aae-6bc6-4245-a5de-3848b6e4d1c2] Running
	I0307 23:22:10.345362    6816 system_pods.go:61] "kube-scheduler-ha-792400-m03" [daaf3e0b-85a8-4d7f-998b-3c07e04d010b] Running
	I0307 23:22:10.345362    6816 system_pods.go:61] "kube-vip-ha-792400" [31f2517d-5b88-4c07-87cd-66c667534a2f] Running
	I0307 23:22:10.345362    6816 system_pods.go:61] "kube-vip-ha-792400-m02" [b41fc2d0-39a4-4fba-867d-371a5c918c90] Running
	I0307 23:22:10.345362    6816 system_pods.go:61] "kube-vip-ha-792400-m03" [eb0f9382-0ea4-4cb2-9c1e-06d1f891ab99] Running
	I0307 23:22:10.345362    6816 system_pods.go:61] "storage-provisioner" [d2cfae90-8302-4ce4-8292-de4938b0b9ae] Running
	I0307 23:22:10.345362    6816 system_pods.go:74] duration metric: took 171.0377ms to wait for pod list to return data ...
	I0307 23:22:10.345362    6816 default_sa.go:34] waiting for default service account to be created ...
	I0307 23:22:10.509632    6816 request.go:629] Waited for 163.7269ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/default/serviceaccounts
	I0307 23:22:10.509632    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/default/serviceaccounts
	I0307 23:22:10.509632    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:10.509632    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:10.509632    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:10.516303    6816 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0307 23:22:10.517001    6816 default_sa.go:45] found service account: "default"
	I0307 23:22:10.517069    6816 default_sa.go:55] duration metric: took 171.7054ms for default service account to be created ...
	I0307 23:22:10.517069    6816 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 23:22:10.712068    6816 request.go:629] Waited for 194.8586ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:22:10.712068    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/namespaces/kube-system/pods
	I0307 23:22:10.712068    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:10.712068    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:10.712068    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:10.720950    6816 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0307 23:22:10.731284    6816 system_pods.go:86] 24 kube-system pods found
	I0307 23:22:10.731284    6816 system_pods.go:89] "coredns-5dd5756b68-28rtr" [8f70fcea-fb5e-4bfe-a184-a7487922459d] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "coredns-5dd5756b68-rx9dg" [09969ba6-29bd-449a-8df2-85d52c1cca8e] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "etcd-ha-792400" [6d4e209d-fc9c-4f71-a13f-b359b65ae7ad] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "etcd-ha-792400-m02" [ed952253-b72b-4443-9189-ad1dcfabc268] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "etcd-ha-792400-m03" [048f57d4-7047-45b1-b865-e5768ce81ebf] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "kindnet-7bztm" [a0918f25-6cde-462e-8f12-58c424e25ffa] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "kindnet-fvx87" [e26e6f69-a3e8-4b89-9ec0-21959683db17] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "kindnet-nwgxl" [07d0d037-8522-4af4-9c41-d05bad3c2753] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "kube-apiserver-ha-792400" [2356c8e9-8a52-4bf2-b8e6-24974e45f15c] Running
	I0307 23:22:10.731284    6816 system_pods.go:89] "kube-apiserver-ha-792400-m02" [54d24fa6-cc12-47f7-89b8-07c35b710b9c] Running
	I0307 23:22:10.731862    6816 system_pods.go:89] "kube-apiserver-ha-792400-m03" [f689ec77-3fff-48a7-bef0-6ca89dbae7fa] Running
	I0307 23:22:10.731919    6816 system_pods.go:89] "kube-controller-manager-ha-792400" [57efa972-84b4-4614-b8e0-c6e3eeef55f7] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-controller-manager-ha-792400-m02" [3a897c1b-a6a9-4ecb-abb4-f350789cde8a] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-controller-manager-ha-792400-m03" [e58b980b-940b-4da9-868a-d5c6d7d8b8e3] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-proxy-2rxpp" [ea9a7d5a-b760-4056-ab38-cfa70276c427] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-proxy-j6wd5" [bc09092e-551d-448f-af38-f8412bdcfe3a] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-proxy-zxmcc" [0a429b85-7b58-447e-963b-39976d48fba0] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-scheduler-ha-792400" [24c51162-87f0-4232-bc6a-32aef6110baa] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-scheduler-ha-792400-m02" [26d95aae-6bc6-4245-a5de-3848b6e4d1c2] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-scheduler-ha-792400-m03" [daaf3e0b-85a8-4d7f-998b-3c07e04d010b] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-vip-ha-792400" [31f2517d-5b88-4c07-87cd-66c667534a2f] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-vip-ha-792400-m02" [b41fc2d0-39a4-4fba-867d-371a5c918c90] Running
	I0307 23:22:10.731957    6816 system_pods.go:89] "kube-vip-ha-792400-m03" [eb0f9382-0ea4-4cb2-9c1e-06d1f891ab99] Running
	I0307 23:22:10.732181    6816 system_pods.go:89] "storage-provisioner" [d2cfae90-8302-4ce4-8292-de4938b0b9ae] Running
	I0307 23:22:10.732181    6816 system_pods.go:126] duration metric: took 215.1106ms to wait for k8s-apps to be running ...
	I0307 23:22:10.732181    6816 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 23:22:10.743666    6816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 23:22:10.768375    6816 system_svc.go:56] duration metric: took 36.1347ms WaitForService to wait for kubelet
	I0307 23:22:10.768375    6816 kubeadm.go:576] duration metric: took 19.376972s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 23:22:10.768454    6816 node_conditions.go:102] verifying NodePressure condition ...
	I0307 23:22:10.916510    6816 request.go:629] Waited for 147.9773ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.58.169:8443/api/v1/nodes
	I0307 23:22:10.916741    6816 round_trippers.go:463] GET https://172.20.58.169:8443/api/v1/nodes
	I0307 23:22:10.916741    6816 round_trippers.go:469] Request Headers:
	I0307 23:22:10.916741    6816 round_trippers.go:473]     Accept: application/json, */*
	I0307 23:22:10.916741    6816 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0307 23:22:10.921833    6816 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0307 23:22:10.923806    6816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 23:22:10.923806    6816 node_conditions.go:123] node cpu capacity is 2
	I0307 23:22:10.923871    6816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 23:22:10.923871    6816 node_conditions.go:123] node cpu capacity is 2
	I0307 23:22:10.923871    6816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0307 23:22:10.923871    6816 node_conditions.go:123] node cpu capacity is 2
	I0307 23:22:10.923871    6816 node_conditions.go:105] duration metric: took 155.4152ms to run NodePressure ...
	I0307 23:22:10.923871    6816 start.go:240] waiting for startup goroutines ...
	I0307 23:22:10.923871    6816 start.go:254] writing updated cluster config ...
	I0307 23:22:10.935632    6816 ssh_runner.go:195] Run: rm -f paused
	I0307 23:22:11.075440    6816 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0307 23:22:11.078840    6816 out.go:177] * Done! kubectl is now configured to use "ha-792400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 07 23:18:11 ha-792400 dockerd[1314]: time="2024-03-07T23:18:11.708931596Z" level=info msg="shim disconnected" id=2daf2cbbe82d3a521289817e25889c3648a5173475004c4613e5691e15669dea namespace=moby
	Mar 07 23:18:11 ha-792400 dockerd[1314]: time="2024-03-07T23:18:11.708999698Z" level=warning msg="cleaning up after shim disconnected" id=2daf2cbbe82d3a521289817e25889c3648a5173475004c4613e5691e15669dea namespace=moby
	Mar 07 23:18:11 ha-792400 dockerd[1314]: time="2024-03-07T23:18:11.709011999Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 23:18:12 ha-792400 dockerd[1308]: time="2024-03-07T23:18:12.178688324Z" level=info msg="ignoring event" container=20e4ebbcc8a68e4542e27d912a6e3a14783afdf7df30d88386e8f4667dd8986e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 07 23:18:12 ha-792400 dockerd[1314]: time="2024-03-07T23:18:12.182561552Z" level=info msg="shim disconnected" id=20e4ebbcc8a68e4542e27d912a6e3a14783afdf7df30d88386e8f4667dd8986e namespace=moby
	Mar 07 23:18:12 ha-792400 dockerd[1314]: time="2024-03-07T23:18:12.182833361Z" level=warning msg="cleaning up after shim disconnected" id=20e4ebbcc8a68e4542e27d912a6e3a14783afdf7df30d88386e8f4667dd8986e namespace=moby
	Mar 07 23:18:12 ha-792400 dockerd[1314]: time="2024-03-07T23:18:12.183001067Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 07 23:18:12 ha-792400 dockerd[1314]: time="2024-03-07T23:18:12.333036326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 23:18:12 ha-792400 dockerd[1314]: time="2024-03-07T23:18:12.333357337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 23:18:12 ha-792400 dockerd[1314]: time="2024-03-07T23:18:12.333466541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:18:12 ha-792400 dockerd[1314]: time="2024-03-07T23:18:12.333760750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:18:13 ha-792400 dockerd[1314]: time="2024-03-07T23:18:13.308005359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 23:18:13 ha-792400 dockerd[1314]: time="2024-03-07T23:18:13.308192665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 23:18:13 ha-792400 dockerd[1314]: time="2024-03-07T23:18:13.308232667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:18:13 ha-792400 dockerd[1314]: time="2024-03-07T23:18:13.308681281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:22:47 ha-792400 dockerd[1314]: time="2024-03-07T23:22:47.308664929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 23:22:47 ha-792400 dockerd[1314]: time="2024-03-07T23:22:47.308886636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 23:22:47 ha-792400 dockerd[1314]: time="2024-03-07T23:22:47.308919337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:22:47 ha-792400 dockerd[1314]: time="2024-03-07T23:22:47.309344752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:22:47 ha-792400 cri-dockerd[1200]: time="2024-03-07T23:22:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fd4b0e249592808d75765de1fc6ca7e6e072768f1ca17d13c7e995c224c3d131/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 07 23:22:48 ha-792400 cri-dockerd[1200]: time="2024-03-07T23:22:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 07 23:22:49 ha-792400 dockerd[1314]: time="2024-03-07T23:22:49.035744772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 07 23:22:49 ha-792400 dockerd[1314]: time="2024-03-07T23:22:49.036125174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 07 23:22:49 ha-792400 dockerd[1314]: time="2024-03-07T23:22:49.036373175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 07 23:22:49 ha-792400 dockerd[1314]: time="2024-03-07T23:22:49.036854078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb1b44317c3b9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago      Running             busybox                   0                   fd4b0e2495928       busybox-5b5d89c9d6-wmtt9
	0315e442ba536       22aaebb38f4a9                                                                                         24 minutes ago      Running             kube-vip                  1                   2aa33ef112e26       kube-vip-ha-792400
	9538b967bece1       6e38f40d628db                                                                                         24 minutes ago      Running             storage-provisioner       1                   d74f2c3b71b39       storage-provisioner
	3fc0d637315e9       ead0a4a53df89                                                                                         27 minutes ago      Running             coredns                   0                   355749546e87f       coredns-5dd5756b68-28rtr
	0813d71e015b1       ead0a4a53df89                                                                                         27 minutes ago      Running             coredns                   0                   6c7c323c35782       coredns-5dd5756b68-rx9dg
	2daf2cbbe82d3       6e38f40d628db                                                                                         27 minutes ago      Exited              storage-provisioner       0                   d74f2c3b71b39       storage-provisioner
	acd6e0511261f       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              27 minutes ago      Running             kindnet-cni               0                   55a843de34893       kindnet-7bztm
	59baf1bee5fee       83f6cc407eed8                                                                                         28 minutes ago      Running             kube-proxy                0                   2ed7ae465f26f       kube-proxy-zxmcc
	20e4ebbcc8a68       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     28 minutes ago      Exited              kube-vip                  0                   2aa33ef112e26       kube-vip-ha-792400
	45cfa4cc5c464       d058aa5ab969c                                                                                         28 minutes ago      Running             kube-controller-manager   0                   762cca51fa8d5       kube-controller-manager-ha-792400
	7f9766203c094       e3db313c6dbc0                                                                                         28 minutes ago      Running             kube-scheduler            0                   0e9ab11944533       kube-scheduler-ha-792400
	678da783bb32e       7fe0e6f37db33                                                                                         28 minutes ago      Running             kube-apiserver            0                   38ae89ab9f3cc       kube-apiserver-ha-792400
	8913a536cdd19       73deb9a3f7025                                                                                         28 minutes ago      Running             etcd                      0                   5aff95ebbe774       etcd-ha-792400
	
	
	==> coredns [0813d71e015b] <==
	[INFO] 10.244.1.2:58689 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107701s
	[INFO] 10.244.2.2:56925 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033465166s
	[INFO] 10.244.2.2:59838 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094701s
	[INFO] 10.244.2.2:55718 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121501s
	[INFO] 10.244.2.2:49031 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162501s
	[INFO] 10.244.2.2:44274 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184201s
	[INFO] 10.244.0.4:33345 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000070701s
	[INFO] 10.244.0.4:40600 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144201s
	[INFO] 10.244.0.4:59482 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124501s
	[INFO] 10.244.1.2:48839 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166401s
	[INFO] 10.244.1.2:49792 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000548s
	[INFO] 10.244.1.2:37296 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000634s
	[INFO] 10.244.1.2:52625 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167301s
	[INFO] 10.244.2.2:49914 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001306s
	[INFO] 10.244.2.2:49704 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132401s
	[INFO] 10.244.2.2:58265 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000539s
	[INFO] 10.244.0.4:35424 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000942s
	[INFO] 10.244.0.4:39973 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109101s
	[INFO] 10.244.1.2:41011 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185201s
	[INFO] 10.244.1.2:54371 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100001s
	[INFO] 10.244.1.2:46308 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000916s
	[INFO] 10.244.2.2:45164 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114001s
	[INFO] 10.244.0.4:58909 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130901s
	[INFO] 10.244.0.4:42049 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000709s
	[INFO] 10.244.0.4:46367 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000587904s
	
	
	==> coredns [3fc0d637315e] <==
	[INFO] 10.244.1.2:55904 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.148342111s
	[INFO] 10.244.1.2:57656 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.166704326s
	[INFO] 10.244.2.2:38876 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.030427749s
	[INFO] 10.244.2.2:52435 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000162001s
	[INFO] 10.244.0.4:55008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000397102s
	[INFO] 10.244.1.2:49148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195601s
	[INFO] 10.244.1.2:41844 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.048285741s
	[INFO] 10.244.1.2:34705 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186701s
	[INFO] 10.244.1.2:47785 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110001s
	[INFO] 10.244.2.2:49603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000967s
	[INFO] 10.244.2.2:51221 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010920754s
	[INFO] 10.244.2.2:51671 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000741s
	[INFO] 10.244.0.4:59914 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001468s
	[INFO] 10.244.0.4:40006 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000230301s
	[INFO] 10.244.0.4:57558 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000147801s
	[INFO] 10.244.0.4:43569 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111701s
	[INFO] 10.244.0.4:42521 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001403s
	[INFO] 10.244.2.2:45389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177501s
	[INFO] 10.244.0.4:53457 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000276301s
	[INFO] 10.244.0.4:47763 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153501s
	[INFO] 10.244.1.2:50765 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000240302s
	[INFO] 10.244.2.2:41069 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251502s
	[INFO] 10.244.2.2:55299 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000172601s
	[INFO] 10.244.2.2:49701 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0000874s
	[INFO] 10.244.0.4:51908 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169701s
	
	
	==> describe nodes <==
	Name:               ha-792400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=ha-792400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T23_14_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 23:14:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792400
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 23:42:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 23:38:14 +0000   Thu, 07 Mar 2024 23:14:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 23:38:14 +0000   Thu, 07 Mar 2024 23:14:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 23:38:14 +0000   Thu, 07 Mar 2024 23:14:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 23:38:14 +0000   Thu, 07 Mar 2024 23:14:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.58.169
	  Hostname:    ha-792400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 518f80544d79436691eb013fb81341e0
	  System UUID:                4e875024-2316-c944-8dba-40e02e382e31
	  Boot ID:                    5470a58a-ec3e-4fa3-9eae-64bab2e66d3b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-wmtt9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-28rtr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 coredns-5dd5756b68-rx9dg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-ha-792400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-7bztm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-792400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-792400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-zxmcc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-792400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-792400                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28m   kube-proxy       
	  Normal  Starting                 28m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node ha-792400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m   kubelet          Node ha-792400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m   kubelet          Node ha-792400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28m   node-controller  Node ha-792400 event: Registered Node ha-792400 in Controller
	  Normal  NodeReady                27m   kubelet          Node ha-792400 status is now: NodeReady
	  Normal  RegisteredNode           24m   node-controller  Node ha-792400 event: Registered Node ha-792400 in Controller
	  Normal  RegisteredNode           20m   node-controller  Node ha-792400 event: Registered Node ha-792400 in Controller
	
	
	Name:               ha-792400-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=ha-792400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_07T23_18_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 23:18:01 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 23:38:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 07 Mar 2024 23:38:25 +0000   Thu, 07 Mar 2024 23:39:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 07 Mar 2024 23:38:25 +0000   Thu, 07 Mar 2024 23:39:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 07 Mar 2024 23:38:25 +0000   Thu, 07 Mar 2024 23:39:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 07 Mar 2024 23:38:25 +0000   Thu, 07 Mar 2024 23:39:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.20.50.199
	  Hostname:    ha-792400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 c06e0c0854ca4c2588f630a0a76a7d32
	  System UUID:                09cbc96a-b12f-7641-9990-7acdf96b88ef
	  Boot ID:                    07286d93-0fba-4108-933d-df1b049fc5bf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-8vztn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-792400-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-fvx87                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-792400-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-792400-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-j6wd5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-792400-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-792400-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        24m    kube-proxy       
	  Normal  RegisteredNode  24m    node-controller  Node ha-792400-m02 event: Registered Node ha-792400-m02 in Controller
	  Normal  RegisteredNode  24m    node-controller  Node ha-792400-m02 event: Registered Node ha-792400-m02 in Controller
	  Normal  RegisteredNode  20m    node-controller  Node ha-792400-m02 event: Registered Node ha-792400-m02 in Controller
	  Normal  NodeNotReady    3m32s  node-controller  Node ha-792400-m02 status is now: NodeNotReady
	
	
	Name:               ha-792400-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=ha-792400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_07T23_21_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 23:21:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 23:42:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 23:38:39 +0000   Thu, 07 Mar 2024 23:21:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 23:38:39 +0000   Thu, 07 Mar 2024 23:21:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 23:38:39 +0000   Thu, 07 Mar 2024 23:21:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 23:38:39 +0000   Thu, 07 Mar 2024 23:22:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.59.36
	  Hostname:    ha-792400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 2632989d3c70459290afc2ae7511010b
	  System UUID:                6840328b-e690-ab4b-a122-61c112570da5
	  Boot ID:                    b813436e-fd9d-48ec-9666-c69e1df60d6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-dswbq                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-792400-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-nwgxl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-792400-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-792400-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-2rxpp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-792400-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-792400-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        20m   kube-proxy       
	  Normal  RegisteredNode  20m   node-controller  Node ha-792400-m03 event: Registered Node ha-792400-m03 in Controller
	  Normal  RegisteredNode  20m   node-controller  Node ha-792400-m03 event: Registered Node ha-792400-m03 in Controller
	  Normal  RegisteredNode  20m   node-controller  Node ha-792400-m03 event: Registered Node ha-792400-m03 in Controller
	
	
	Name:               ha-792400-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-792400-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=ha-792400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_07T23_26_52_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 23:26:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-792400-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 23:42:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 23:37:35 +0000   Thu, 07 Mar 2024 23:26:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 23:37:35 +0000   Thu, 07 Mar 2024 23:26:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 23:37:35 +0000   Thu, 07 Mar 2024 23:26:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 23:37:35 +0000   Thu, 07 Mar 2024 23:27:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.57.78
	  Hostname:    ha-792400-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 78ae8c967b8d420094630d0e82f9f2db
	  System UUID:                30d21e2b-35d1-1145-b7b7-2c1dd9fa27cd
	  Boot ID:                    2d311c76-3b81-4954-beb5-421afd50ba29
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4jj9c       100m (5%)    100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-proxy-4rh5h    0 (0%)       0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  RegisteredNode           15m                node-controller  Node ha-792400-m04 event: Registered Node ha-792400-m04 in Controller
	  Normal  NodeHasSufficientMemory  15m (x5 over 15m)  kubelet          Node ha-792400-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x5 over 15m)  kubelet          Node ha-792400-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x5 over 15m)  kubelet          Node ha-792400-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node ha-792400-m04 event: Registered Node ha-792400-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-792400-m04 event: Registered Node ha-792400-m04 in Controller
	  Normal  NodeReady                15m                kubelet          Node ha-792400-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar 7 23:13] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.147326] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[ +25.627823] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +0.082793] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.457977] systemd-fstab-generator[969]: Ignoring "noauto" option for root device
	[  +0.157469] systemd-fstab-generator[981]: Ignoring "noauto" option for root device
	[  +0.199615] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +1.733500] systemd-fstab-generator[1153]: Ignoring "noauto" option for root device
	[  +0.179026] systemd-fstab-generator[1165]: Ignoring "noauto" option for root device
	[  +0.162468] systemd-fstab-generator[1178]: Ignoring "noauto" option for root device
	[  +0.236273] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[Mar 7 23:14] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.084202] kauditd_printk_skb: 205 callbacks suppressed
	[  +2.582960] systemd-fstab-generator[1486]: Ignoring "noauto" option for root device
	[  +5.798457] systemd-fstab-generator[1751]: Ignoring "noauto" option for root device
	[  +0.084148] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.319481] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.524949] systemd-fstab-generator[2480]: Ignoring "noauto" option for root device
	[ +13.749359] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.845161] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.194244] kauditd_printk_skb: 14 callbacks suppressed
	[Mar 7 23:18] kauditd_printk_skb: 13 callbacks suppressed
	[  +8.952770] kauditd_printk_skb: 2 callbacks suppressed
	[Mar 7 23:33] hrtimer: interrupt took 2760059 ns
	
	
	==> etcd [8913a536cdd1] <==
	{"level":"warn","ts":"2024-03-07T23:42:38.738909Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.749882Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.755453Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.773096Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.784287Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.794648Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.797949Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.80173Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.809018Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.820667Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.823725Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.831139Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.83986Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.845962Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.85055Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.860883Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.870587Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.87996Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.885934Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.890972Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.89947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.909871Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.917819Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:38.924085Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-07T23:42:39.000293Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2f3defd672ec6b35","from":"2f3defd672ec6b35","remote-peer-id":"918c2185a187a7c3","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:42:39 up 30 min,  0 users,  load average: 0.16, 0.39, 0.50
	Linux ha-792400 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [acd6e0511261] <==
	I0307 23:42:08.761957       1 main.go:250] Node ha-792400-m04 has CIDR [10.244.3.0/24] 
	I0307 23:42:18.777694       1 main.go:223] Handling node with IPs: map[172.20.58.169:{}]
	I0307 23:42:18.777862       1 main.go:227] handling current node
	I0307 23:42:18.777897       1 main.go:223] Handling node with IPs: map[172.20.50.199:{}]
	I0307 23:42:18.777969       1 main.go:250] Node ha-792400-m02 has CIDR [10.244.1.0/24] 
	I0307 23:42:18.778381       1 main.go:223] Handling node with IPs: map[172.20.59.36:{}]
	I0307 23:42:18.778449       1 main.go:250] Node ha-792400-m03 has CIDR [10.244.2.0/24] 
	I0307 23:42:18.778667       1 main.go:223] Handling node with IPs: map[172.20.57.78:{}]
	I0307 23:42:18.778753       1 main.go:250] Node ha-792400-m04 has CIDR [10.244.3.0/24] 
	I0307 23:42:28.788738       1 main.go:223] Handling node with IPs: map[172.20.58.169:{}]
	I0307 23:42:28.788872       1 main.go:227] handling current node
	I0307 23:42:28.788888       1 main.go:223] Handling node with IPs: map[172.20.50.199:{}]
	I0307 23:42:28.788897       1 main.go:250] Node ha-792400-m02 has CIDR [10.244.1.0/24] 
	I0307 23:42:28.789025       1 main.go:223] Handling node with IPs: map[172.20.59.36:{}]
	I0307 23:42:28.789038       1 main.go:250] Node ha-792400-m03 has CIDR [10.244.2.0/24] 
	I0307 23:42:28.789094       1 main.go:223] Handling node with IPs: map[172.20.57.78:{}]
	I0307 23:42:28.789123       1 main.go:250] Node ha-792400-m04 has CIDR [10.244.3.0/24] 
	I0307 23:42:38.804456       1 main.go:223] Handling node with IPs: map[172.20.58.169:{}]
	I0307 23:42:38.804804       1 main.go:227] handling current node
	I0307 23:42:38.805057       1 main.go:223] Handling node with IPs: map[172.20.50.199:{}]
	I0307 23:42:38.805496       1 main.go:250] Node ha-792400-m02 has CIDR [10.244.1.0/24] 
	I0307 23:42:38.805848       1 main.go:223] Handling node with IPs: map[172.20.59.36:{}]
	I0307 23:42:38.806017       1 main.go:250] Node ha-792400-m03 has CIDR [10.244.2.0/24] 
	I0307 23:42:38.806535       1 main.go:223] Handling node with IPs: map[172.20.57.78:{}]
	I0307 23:42:38.806704       1 main.go:250] Node ha-792400-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [678da783bb32] <==
	Trace[1407565757]:  ---"Txn call completed" 604ms (23:21:57.841)]
	Trace[1407565757]: [605.557454ms] [605.557454ms] END
	I0307 23:27:00.042311       1 trace.go:236] Trace[1026403225]: "Get" accept:application/json, */*,audit-id:ece929e8-c7da-4cce-876c-a255b50da7dc,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (07-Mar-2024 23:26:59.442) (total time: 599ms):
	Trace[1026403225]: ---"About to write a response" 599ms (23:27:00.042)
	Trace[1026403225]: [599.617473ms] [599.617473ms] END
	I0307 23:38:49.773811       1 trace.go:236] Trace[530886518]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.20.58.169,type:*v1.Endpoints,resource:apiServerIPInfo (07-Mar-2024 23:38:48.981) (total time: 792ms):
	Trace[530886518]: ---"initial value restored" 80ms (23:38:49.061)
	Trace[530886518]: ---"Transaction prepared" 264ms (23:38:49.326)
	Trace[530886518]: ---"Txn call completed" 446ms (23:38:49.773)
	Trace[530886518]: [792.291013ms] [792.291013ms] END
	I0307 23:38:54.422854       1 trace.go:236] Trace[1421794852]: "Update" accept:application/json, */*,audit-id:31292389-0537-4349-854f-8293559412a1,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (07-Mar-2024 23:38:53.909) (total time: 513ms):
	Trace[1421794852]: ["GuaranteedUpdate etcd3" audit-id:31292389-0537-4349-854f-8293559412a1,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 513ms (23:38:53.909)
	Trace[1421794852]:  ---"Txn call completed" 512ms (23:38:54.422)]
	Trace[1421794852]: [513.434968ms] [513.434968ms] END
	I0307 23:38:59.703098       1 trace.go:236] Trace[2133476760]: "Update" accept:application/json, */*,audit-id:a4e0458a-7f44-48a4-b716-9d0fc824b285,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (07-Mar-2024 23:38:59.158) (total time: 544ms):
	Trace[2133476760]: ["GuaranteedUpdate etcd3" audit-id:a4e0458a-7f44-48a4-b716-9d0fc824b285,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 544ms (23:38:59.158)
	Trace[2133476760]:  ---"Txn call completed" 543ms (23:38:59.702)]
	Trace[2133476760]: [544.871451ms] [544.871451ms] END
	I0307 23:38:59.706042       1 trace.go:236] Trace[684689375]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.20.58.169,type:*v1.Endpoints,resource:apiServerIPInfo (07-Mar-2024 23:38:58.982) (total time: 723ms):
	Trace[684689375]: ---"Transaction prepared" 170ms (23:38:59.155)
	Trace[684689375]: ---"Txn call completed" 550ms (23:38:59.705)
	Trace[684689375]: [723.993307ms] [723.993307ms] END
	I0307 23:39:05.516410       1 trace.go:236] Trace[2011795684]: "Get" accept:application/json, */*,audit-id:03c33238-db1e-4094-a7f7-d12359801ba2,client:172.20.58.169,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (07-Mar-2024 23:39:04.969) (total time: 546ms):
	Trace[2011795684]: ---"About to write a response" 546ms (23:39:05.516)
	Trace[2011795684]: [546.877121ms] [546.877121ms] END
	
	
	==> kube-controller-manager [45cfa4cc5c46] <==
	I0307 23:22:47.934513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="49.502µs"
	I0307 23:22:47.963147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="75.102µs"
	I0307 23:22:49.416115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="77.482965ms"
	I0307 23:22:49.416715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="83.901µs"
	I0307 23:22:49.694074       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="68.286621ms"
	I0307 23:22:49.694548       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="207.501µs"
	I0307 23:22:49.969957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="33.528857ms"
	I0307 23:22:49.970632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="34.1µs"
	I0307 23:26:51.379735       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-792400-m04\" does not exist"
	I0307 23:26:51.434641       1 range_allocator.go:380] "Set node PodCIDR" node="ha-792400-m04" podCIDRs=["10.244.3.0/24"]
	I0307 23:26:51.440040       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fbxqg"
	I0307 23:26:51.458378       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ddpvn"
	I0307 23:26:51.608518       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-ddpvn"
	I0307 23:26:51.632058       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-hq9sq"
	I0307 23:26:51.719760       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-nzxsh"
	I0307 23:26:51.784845       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-fbxqg"
	I0307 23:26:52.763077       1 event.go:307] "Event occurred" object="ha-792400-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-792400-m04 event: Registered Node ha-792400-m04 in Controller"
	I0307 23:26:52.789455       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-792400-m04"
	I0307 23:26:52.972544       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4jj9c"
	I0307 23:26:53.116979       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-rcz4k"
	I0307 23:26:53.171637       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-9gknn"
	I0307 23:27:08.869773       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-792400-m04"
	I0307 23:39:06.108326       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-792400-m04"
	I0307 23:39:06.826272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.577484ms"
	I0307 23:39:06.826595       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="101.403µs"
	
	
	==> kube-proxy [59baf1bee5fe] <==
	I0307 23:14:34.497587       1 server_others.go:69] "Using iptables proxy"
	I0307 23:14:34.511825       1 node.go:141] Successfully retrieved node IP: 172.20.58.169
	I0307 23:14:34.566839       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0307 23:14:34.566913       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0307 23:14:34.573041       1 server_others.go:152] "Using iptables Proxier"
	I0307 23:14:34.573175       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 23:14:34.574019       1 server.go:846] "Version info" version="v1.28.4"
	I0307 23:14:34.574148       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 23:14:34.575768       1 config.go:188] "Starting service config controller"
	I0307 23:14:34.575817       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0307 23:14:34.575847       1 config.go:97] "Starting endpoint slice config controller"
	I0307 23:14:34.575856       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0307 23:14:34.576563       1 config.go:315] "Starting node config controller"
	I0307 23:14:34.576600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0307 23:14:34.677153       1 shared_informer.go:318] Caches are synced for node config
	I0307 23:14:34.677555       1 shared_informer.go:318] Caches are synced for service config
	I0307 23:14:34.677669       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7f9766203c09] <==
	I0307 23:22:46.378915       1 cache.go:518] "Pod was added to a different node than it was assumed" podKey="709c11ff-324c-401a-826a-318d1ca71260" pod="default/busybox-5b5d89c9d6-dswbq" assumedNode="ha-792400-m03" currentNode="ha-792400-m02"
	E0307 23:22:46.411059       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-dswbq\": pod busybox-5b5d89c9d6-dswbq is already assigned to node \"ha-792400-m03\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-dswbq" node="ha-792400-m02"
	E0307 23:22:46.417402       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 709c11ff-324c-401a-826a-318d1ca71260(default/busybox-5b5d89c9d6-dswbq) was assumed on ha-792400-m02 but assigned to ha-792400-m03"
	E0307 23:22:46.417855       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-dswbq\": pod busybox-5b5d89c9d6-dswbq is already assigned to node \"ha-792400-m03\"" pod="default/busybox-5b5d89c9d6-dswbq"
	I0307 23:22:46.418152       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-dswbq" node="ha-792400-m03"
	E0307 23:22:46.429715       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-pzdrp\": pod busybox-5b5d89c9d6-pzdrp is already assigned to node \"ha-792400\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-pzdrp" node="ha-792400"
	E0307 23:22:46.432130       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-pzdrp\": pod busybox-5b5d89c9d6-pzdrp is already assigned to node \"ha-792400\"" pod="default/busybox-5b5d89c9d6-pzdrp"
	E0307 23:26:51.484509       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ddpvn\": pod kube-proxy-ddpvn is already assigned to node \"ha-792400-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ddpvn" node="ha-792400-m04"
	E0307 23:26:51.484610       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 84611f4f-293d-480b-8cf2-8b2666e82237(kube-system/kube-proxy-ddpvn) wasn't assumed so cannot be forgotten"
	E0307 23:26:51.484642       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ddpvn\": pod kube-proxy-ddpvn is already assigned to node \"ha-792400-m04\"" pod="kube-system/kube-proxy-ddpvn"
	I0307 23:26:51.485225       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ddpvn" node="ha-792400-m04"
	E0307 23:26:51.486213       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fbxqg\": pod kindnet-fbxqg is already assigned to node \"ha-792400-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fbxqg" node="ha-792400-m04"
	E0307 23:26:51.486307       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 23c6220f-70cd-46a0-94f0-a0616f7ed282(kube-system/kindnet-fbxqg) wasn't assumed so cannot be forgotten"
	E0307 23:26:51.486328       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fbxqg\": pod kindnet-fbxqg is already assigned to node \"ha-792400-m04\"" pod="kube-system/kindnet-fbxqg"
	I0307 23:26:51.486594       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fbxqg" node="ha-792400-m04"
	E0307 23:26:53.014733       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4jj9c\": pod kindnet-4jj9c is already assigned to node \"ha-792400-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4jj9c" node="ha-792400-m04"
	E0307 23:26:53.014888       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4jj9c\": pod kindnet-4jj9c is already assigned to node \"ha-792400-m04\"" pod="kube-system/kindnet-4jj9c"
	E0307 23:26:53.077840       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9gknn\": pod kindnet-9gknn is already assigned to node \"ha-792400-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-9gknn" node="ha-792400-m04"
	E0307 23:26:53.081022       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod e089508d-a95e-4081-aa22-a5c6d09cb424(kube-system/kindnet-9gknn) wasn't assumed so cannot be forgotten"
	E0307 23:26:53.081151       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9gknn\": pod kindnet-9gknn is already assigned to node \"ha-792400-m04\"" pod="kube-system/kindnet-9gknn"
	I0307 23:26:53.081213       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9gknn" node="ha-792400-m04"
	E0307 23:26:53.115103       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rcz4k\": pod kindnet-rcz4k is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-rcz4k" node="ha-792400-m04"
	E0307 23:26:53.117226       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 79ebea70-6ea9-4b8d-b93f-067f192519f3(kube-system/kindnet-rcz4k) wasn't assumed so cannot be forgotten"
	E0307 23:26:53.117308       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rcz4k\": pod kindnet-rcz4k is being deleted, cannot be assigned to a host" pod="kube-system/kindnet-rcz4k"
	I0307 23:26:53.117330       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rcz4k" node="ha-792400-m04"
	
	
	==> kubelet <==
	Mar 07 23:38:20 ha-792400 kubelet[2501]: E0307 23:38:20.630686    2501 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 07 23:38:20 ha-792400 kubelet[2501]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 07 23:38:20 ha-792400 kubelet[2501]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 07 23:38:20 ha-792400 kubelet[2501]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 07 23:38:20 ha-792400 kubelet[2501]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 07 23:39:20 ha-792400 kubelet[2501]: E0307 23:39:20.628194    2501 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 07 23:39:20 ha-792400 kubelet[2501]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 07 23:39:20 ha-792400 kubelet[2501]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 07 23:39:20 ha-792400 kubelet[2501]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 07 23:39:20 ha-792400 kubelet[2501]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 07 23:40:20 ha-792400 kubelet[2501]: E0307 23:40:20.627896    2501 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 07 23:40:20 ha-792400 kubelet[2501]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 07 23:40:20 ha-792400 kubelet[2501]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 07 23:40:20 ha-792400 kubelet[2501]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 07 23:40:20 ha-792400 kubelet[2501]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 07 23:41:20 ha-792400 kubelet[2501]: E0307 23:41:20.627756    2501 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 07 23:41:20 ha-792400 kubelet[2501]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 07 23:41:20 ha-792400 kubelet[2501]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 07 23:41:20 ha-792400 kubelet[2501]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 07 23:41:20 ha-792400 kubelet[2501]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 07 23:42:20 ha-792400 kubelet[2501]: E0307 23:42:20.627630    2501 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 07 23:42:20 ha-792400 kubelet[2501]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 07 23:42:20 ha-792400 kubelet[2501]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 07 23:42:20 ha-792400 kubelet[2501]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 07 23:42:20 ha-792400 kubelet[2501]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 23:42:31.108251   11696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
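The kube-scheduler section of the post-mortem log above is dominated by DefaultBinder "already assigned to node" errors: the scheduler assumed one node for a pod while the binding recorded in the API server points at another, so the Bind call is rejected and the pod stays on whatever node spec.nodeName already names. A minimal client-go sketch for confirming where such a pod actually landed is below; the kubeconfig path is a placeholder (in this report the profile kubeconfig lives under C:\Users\jenkins.minikube7\minikube-integration\kubeconfig) and the kube-system namespace is taken from the log, so read it as an illustration rather than part of the test suite.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; substitute the profile's kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			// spec.nodeName is the binding that wins; it is the node named in the
			// "already assigned to node" errors above.
			fmt.Printf("%s -> %s\n", p.Name, p.Spec.NodeName)
		}
	}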
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-792400 -n ha-792400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-792400 -n ha-792400: (11.7504166s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-792400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/RestartSecondaryNode (185.60s)
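The kubelet section of the same log repeats an ip6tables canary failure once a minute: creating the KUBE-KUBELET-CANARY chain in the ip6tables "nat" table exits with status 3 because, as the message says, the guest cannot initialize that table (no ip6table_nat support), which is harmless noise for these IPv4-only runs. The sketch below, assuming root inside the minikube VM and the legacy ip6tables binary seen in the log, mirrors that probe with os/exec so the same failure can be reproduced by hand.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same probe the kubelet logs above show: try to create a throwaway chain
		// in the ip6tables "nat" table; on this guest it fails with exit status 3.
		out, err := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Printf("canary probe failed: %v\n", err)
		}
	}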

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (52.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- exec busybox-5b5d89c9d6-ctt42 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- exec busybox-5b5d89c9d6-ctt42 -- sh -c "ping -c 1 172.20.48.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- exec busybox-5b5d89c9d6-ctt42 -- sh -c "ping -c 1 172.20.48.1": exit status 1 (10.427654s)

                                                
                                                
-- stdout --
	PING 172.20.48.1 (172.20.48.1): 56 data bytes
	
	--- 172.20.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 00:17:26.282414    6908 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.20.48.1) from pod (busybox-5b5d89c9d6-ctt42): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- exec busybox-5b5d89c9d6-j7ck4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- exec busybox-5b5d89c9d6-j7ck4 -- sh -c "ping -c 1 172.20.48.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- exec busybox-5b5d89c9d6-j7ck4 -- sh -c "ping -c 1 172.20.48.1": exit status 1 (10.4288658s)

                                                
                                                
-- stdout --
	PING 172.20.48.1 (172.20.48.1): 56 data bytes
	
	--- 172.20.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 00:17:37.177476    3328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.20.48.1) from pod (busybox-5b5d89c9d6-j7ck4): exit status 1
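Both busybox pods resolve host.minikube.internal but lose every ICMP packet to the Hyper-V host address 172.20.48.1, so the break is in the guest-to-host path (one common culprit on Hyper-V is the Windows host filtering ICMP from the Default Switch subnet) rather than in DNS. A hedged Go sketch of the same check is below; the kubectl context, pod name, and host IP are copied from the log, and the helper name pingFromPod is invented for illustration.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// pingFromPod mirrors the shape of the failing check above: run ping inside a
	// pod via kubectl exec and treat a non-zero exit as packet loss.
	func pingFromPod(kubectlContext, pod, hostIP string) error {
		cmd := exec.Command("kubectl", "--context", kubectlContext,
			"exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\n", out)
		return err // ping exits 1 when all packets are lost, as in the output above
	}

	func main() {
		if err := pingFromPod("multinode-397400", "busybox-5b5d89c9d6-ctt42", "172.20.48.1"); err != nil {
			fmt.Println("ping failed:", err)
		}
	}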
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-397400 -n multinode-397400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-397400 -n multinode-397400: (10.6093405s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 logs -n 25: (7.5032784s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-590900 ssh -- ls                    | mount-start-2-590900 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:07 UTC | 08 Mar 24 00:07 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-590900                           | mount-start-1-590900 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:07 UTC | 08 Mar 24 00:07 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-590900 ssh -- ls                    | mount-start-2-590900 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:07 UTC | 08 Mar 24 00:08 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-590900                           | mount-start-2-590900 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:08 UTC | 08 Mar 24 00:08 UTC |
	| start   | -p mount-start-2-590900                           | mount-start-2-590900 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:08 UTC | 08 Mar 24 00:10 UTC |
	| mount   | C:\Users\jenkins.minikube7:/minikube-host         | mount-start-2-590900 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:10 UTC |                     |
	|         | --profile mount-start-2-590900 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-590900 ssh -- ls                    | mount-start-2-590900 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:10 UTC | 08 Mar 24 00:10 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-590900                           | mount-start-2-590900 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:10 UTC | 08 Mar 24 00:10 UTC |
	| delete  | -p mount-start-1-590900                           | mount-start-1-590900 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:10 UTC | 08 Mar 24 00:10 UTC |
	| start   | -p multinode-397400                               | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:10 UTC | 08 Mar 24 00:16 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- apply -f                   | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC | 08 Mar 24 00:17 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- rollout                    | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC | 08 Mar 24 00:17 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- get pods -o                | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC | 08 Mar 24 00:17 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- get pods -o                | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC | 08 Mar 24 00:17 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- exec                       | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC | 08 Mar 24 00:17 UTC |
	|         | busybox-5b5d89c9d6-ctt42 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- exec                       | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC | 08 Mar 24 00:17 UTC |
	|         | busybox-5b5d89c9d6-j7ck4 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- exec                       | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC | 08 Mar 24 00:17 UTC |
	|         | busybox-5b5d89c9d6-ctt42 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- exec                       | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC | 08 Mar 24 00:17 UTC |
	|         | busybox-5b5d89c9d6-j7ck4 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- exec                       | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC | 08 Mar 24 00:17 UTC |
	|         | busybox-5b5d89c9d6-ctt42 -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- exec                       | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC | 08 Mar 24 00:17 UTC |
	|         | busybox-5b5d89c9d6-j7ck4 -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- get pods -o                | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC | 08 Mar 24 00:17 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- exec                       | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC | 08 Mar 24 00:17 UTC |
	|         | busybox-5b5d89c9d6-ctt42                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- exec                       | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC |                     |
	|         | busybox-5b5d89c9d6-ctt42 -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.48.1                          |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- exec                       | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC | 08 Mar 24 00:17 UTC |
	|         | busybox-5b5d89c9d6-j7ck4                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-397400 -- exec                       | multinode-397400     | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:17 UTC |                     |
	|         | busybox-5b5d89c9d6-j7ck4 -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.48.1                          |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
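The Last Start log that follows shows how the hyperv driver provisions this cluster: each Hyper-V step (Get-VMSwitch, New-VHD, Convert-VHD, Resize-VHD, New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive, Add-VMHardDiskDrive, Start-VM) is a single non-interactive PowerShell invocation, and the driver then polls the VM state and the first adapter's IP address before switching to SSH-based provisioning. A minimal sketch of that invocation pattern, assuming a Windows host with Hyper-V and using the VM name from this run purely as an example, is:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runPS mirrors the "[executing ==>]" lines below: one Hyper-V operation per
	// non-interactive PowerShell call, with stdout and stderr captured together.
	func runPS(command string) (string, error) {
		out, err := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", command).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Poll the VM state the way the driver does while waiting for the host to
		// start; "multinode-397400" is just the profile name used in this run.
		state, err := runPS(`( Hyper-V\Get-VM multinode-397400 ).state`)
		fmt.Printf("state: %s err: %v\n", state, err)
	}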
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 00:10:49
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 00:10:49.978283   12824 out.go:291] Setting OutFile to fd 988 ...
	I0308 00:10:49.978754   12824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 00:10:49.978754   12824 out.go:304] Setting ErrFile to fd 956...
	I0308 00:10:49.978754   12824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 00:10:50.001747   12824 out.go:298] Setting JSON to false
	I0308 00:10:50.004737   12824 start.go:129] hostinfo: {"hostname":"minikube7","uptime":15604,"bootTime":1709841045,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0308 00:10:50.004737   12824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0308 00:10:50.013813   12824 out.go:177] * [multinode-397400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0308 00:10:50.022190   12824 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:10:50.020804   12824 notify.go:220] Checking for updates...
	I0308 00:10:50.029663   12824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 00:10:50.035275   12824 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0308 00:10:50.040752   12824 out.go:177]   - MINIKUBE_LOCATION=16214
	I0308 00:10:50.046519   12824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 00:10:50.051692   12824 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:10:50.052259   12824 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 00:10:54.856326   12824 out.go:177] * Using the hyperv driver based on user configuration
	I0308 00:10:54.860029   12824 start.go:297] selected driver: hyperv
	I0308 00:10:54.860029   12824 start.go:901] validating driver "hyperv" against <nil>
	I0308 00:10:54.860029   12824 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 00:10:54.905037   12824 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0308 00:10:54.906449   12824 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 00:10:54.906449   12824 cni.go:84] Creating CNI manager for ""
	I0308 00:10:54.906449   12824 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0308 00:10:54.906449   12824 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0308 00:10:54.906449   12824 start.go:340] cluster config:
	{Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 00:10:54.907044   12824 iso.go:125] acquiring lock: {Name:mk41e0d38e058de906ab8df117c3158b3dc0e5b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 00:10:54.910750   12824 out.go:177] * Starting "multinode-397400" primary control-plane node in "multinode-397400" cluster
	I0308 00:10:54.912544   12824 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 00:10:54.913321   12824 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0308 00:10:54.913321   12824 cache.go:56] Caching tarball of preloaded images
	I0308 00:10:54.913693   12824 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0308 00:10:54.913986   12824 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0308 00:10:54.913986   12824 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:10:54.913986   12824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json: {Name:mk2562594099651a3688500c189a6a1adc242169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:10:54.915704   12824 start.go:360] acquireMachinesLock for multinode-397400: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 00:10:54.916299   12824 start.go:364] duration metric: took 68.2µs to acquireMachinesLock for "multinode-397400"
	I0308 00:10:54.916299   12824 start.go:93] Provisioning new machine with config: &{Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 00:10:54.916299   12824 start.go:125] createHost starting for "" (driver="hyperv")
	I0308 00:10:54.919154   12824 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0308 00:10:54.919797   12824 start.go:159] libmachine.API.Create for "multinode-397400" (driver="hyperv")
	I0308 00:10:54.920071   12824 client.go:168] LocalClient.Create starting
	I0308 00:10:54.920106   12824 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0308 00:10:54.920926   12824 main.go:141] libmachine: Decoding PEM data...
	I0308 00:10:54.920926   12824 main.go:141] libmachine: Parsing certificate...
	I0308 00:10:54.920926   12824 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0308 00:10:54.921607   12824 main.go:141] libmachine: Decoding PEM data...
	I0308 00:10:54.921607   12824 main.go:141] libmachine: Parsing certificate...
	I0308 00:10:54.921607   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0308 00:10:56.766942   12824 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0308 00:10:56.766942   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:10:56.766942   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0308 00:10:58.304440   12824 main.go:141] libmachine: [stdout =====>] : False
	
	I0308 00:10:58.304440   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:10:58.304440   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0308 00:10:59.626292   12824 main.go:141] libmachine: [stdout =====>] : True
	
	I0308 00:10:59.626292   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:10:59.626292   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0308 00:11:02.833257   12824 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0308 00:11:02.833618   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:02.836924   12824 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 00:11:03.332214   12824 main.go:141] libmachine: Creating SSH key...
	I0308 00:11:03.619178   12824 main.go:141] libmachine: Creating VM...
	I0308 00:11:03.619178   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0308 00:11:06.215044   12824 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0308 00:11:06.215972   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:06.215972   12824 main.go:141] libmachine: Using switch "Default Switch"
	I0308 00:11:06.216154   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0308 00:11:07.754613   12824 main.go:141] libmachine: [stdout =====>] : True
	
	I0308 00:11:07.754613   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:07.754613   12824 main.go:141] libmachine: Creating VHD
	I0308 00:11:07.754613   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0308 00:11:11.194219   12824 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : A16CACDC-1906-44B8-A419-C82C72CAFFDA
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0308 00:11:11.195284   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:11.195284   12824 main.go:141] libmachine: Writing magic tar header
	I0308 00:11:11.195284   12824 main.go:141] libmachine: Writing SSH key tar header
	I0308 00:11:11.204315   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0308 00:11:14.131339   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:11:14.131507   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:14.131575   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\disk.vhd' -SizeBytes 20000MB
	I0308 00:11:16.430861   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:11:16.430861   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:16.431862   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-397400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0308 00:11:19.684230   12824 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-397400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0308 00:11:19.685054   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:19.685174   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-397400 -DynamicMemoryEnabled $false
	I0308 00:11:21.676278   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:11:21.677010   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:21.677010   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-397400 -Count 2
	I0308 00:11:23.633683   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:11:23.633683   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:23.633789   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-397400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\boot2docker.iso'
	I0308 00:11:25.929910   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:11:25.929910   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:25.929910   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-397400 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\disk.vhd'
	I0308 00:11:28.331106   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:11:28.331106   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:28.331106   12824 main.go:141] libmachine: Starting VM...
	I0308 00:11:28.332045   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-397400
	I0308 00:11:31.144659   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:11:31.144659   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:31.145259   12824 main.go:141] libmachine: Waiting for host to start...
	I0308 00:11:31.145315   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:11:33.137445   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:11:33.137445   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:33.137549   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:11:35.388824   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:11:35.388880   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:36.403262   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:11:38.413126   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:11:38.413559   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:38.413559   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:11:40.719089   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:11:40.720086   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:41.726383   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:11:43.692106   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:11:43.692106   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:43.692106   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:11:45.952016   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:11:45.953039   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:46.957603   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:11:48.956732   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:11:48.956732   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:48.957554   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:11:51.261244   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:11:51.261244   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:52.269354   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:11:54.388057   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:11:54.388057   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:54.388057   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:11:56.701503   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:11:56.701503   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:56.701630   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:11:58.596471   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:11:58.596664   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:11:58.596664   12824 machine.go:94] provisionDockerMachine start ...
	I0308 00:11:58.596811   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:00.534675   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:12:00.535125   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:00.535125   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:12:02.854631   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:12:02.854692   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:02.859885   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:12:02.869824   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.48.212 22 <nil> <nil>}
	I0308 00:12:02.869824   12824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 00:12:03.005673   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 00:12:03.005673   12824 buildroot.go:166] provisioning hostname "multinode-397400"
	I0308 00:12:03.005673   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:04.874849   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:12:04.874849   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:04.875133   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:12:07.183982   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:12:07.183982   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:07.189782   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:12:07.189947   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.48.212 22 <nil> <nil>}
	I0308 00:12:07.190517   12824 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-397400 && echo "multinode-397400" | sudo tee /etc/hostname
	I0308 00:12:07.344931   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-397400
	
	I0308 00:12:07.344931   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:09.227538   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:12:09.227538   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:09.228394   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:12:11.544244   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:12:11.544771   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:11.549832   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:12:11.549882   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.48.212 22 <nil> <nil>}
	I0308 00:12:11.549882   12824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-397400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-397400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-397400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 00:12:11.687268   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 00:12:11.687268   12824 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 00:12:11.687268   12824 buildroot.go:174] setting up certificates
	I0308 00:12:11.687268   12824 provision.go:84] configureAuth start
	I0308 00:12:11.687268   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:13.620723   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:12:13.620723   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:13.620723   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:12:15.901961   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:12:15.902503   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:15.902503   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:17.801161   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:12:17.801161   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:17.801549   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:12:20.060495   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:12:20.061071   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:20.061202   12824 provision.go:143] copyHostCerts
	I0308 00:12:20.061449   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0308 00:12:20.061967   12824 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 00:12:20.062036   12824 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 00:12:20.062739   12824 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 00:12:20.064562   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0308 00:12:20.064699   12824 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 00:12:20.064699   12824 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 00:12:20.065325   12824 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 00:12:20.066513   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0308 00:12:20.066982   12824 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 00:12:20.067042   12824 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 00:12:20.067128   12824 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 00:12:20.068450   12824 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-397400 san=[127.0.0.1 172.20.48.212 localhost minikube multinode-397400]
	I0308 00:12:20.379574   12824 provision.go:177] copyRemoteCerts
	I0308 00:12:20.392149   12824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 00:12:20.392149   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:22.280167   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:12:22.280167   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:22.280940   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:12:24.511176   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:12:24.511176   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:24.511176   12824 sshutil.go:53] new ssh client: &{IP:172.20.48.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:12:24.611107   12824 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2189181s)
	I0308 00:12:24.612023   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0308 00:12:24.612152   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 00:12:24.652454   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0308 00:12:24.652684   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0308 00:12:24.691885   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0308 00:12:24.692044   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 00:12:24.733259   12824 provision.go:87] duration metric: took 13.0458667s to configureAuth
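
copyRemoteCerts pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest, and together with the repeated Hyper-V state/IP lookups that accounts for most of the 13s configureAuth figure. A hedged equivalent using the stock scp client and the machine key shown in the log; the /tmp staging and sudo mv are an assumption to cope with /etc/docker being root-owned, and minikube's own ssh_runner streams the files over an in-process SSH session instead:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	key := `C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa`
    	files := []string{
    		`C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem`,
    		`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem`,
    		`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem`,
    	}
    	for _, f := range files {
    		// Stage in /tmp first; /etc/docker needs root to write into.
    		if out, err := exec.Command("scp", "-i", key, f, "docker@172.20.48.212:/tmp/").CombinedOutput(); err != nil {
    			fmt.Println(string(out))
    			panic(err)
    		}
    	}
    	// then e.g.: ssh -i <key> docker@172.20.48.212 "sudo mv /tmp/*.pem /etc/docker/"
    }
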
	I0308 00:12:24.733259   12824 buildroot.go:189] setting minikube options for container-runtime
	I0308 00:12:24.733826   12824 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:12:24.733936   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:26.668790   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:12:26.668790   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:26.669397   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:12:28.925486   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:12:28.926180   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:28.930603   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:12:28.931086   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.48.212 22 <nil> <nil>}
	I0308 00:12:28.931148   12824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 00:12:29.054081   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 00:12:29.054141   12824 buildroot.go:70] root file system type: tmpfs
	I0308 00:12:29.054313   12824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 00:12:29.054454   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:30.936927   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:12:30.937553   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:30.937621   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:12:33.212553   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:12:33.212968   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:33.217401   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:12:33.218211   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.48.212 22 <nil> <nil>}
	I0308 00:12:33.218211   12824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 00:12:33.364550   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 00:12:33.364671   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:35.234703   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:12:35.235494   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:35.235589   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:12:37.494441   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:12:37.494441   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:37.500356   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:12:37.500547   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.48.212 22 <nil> <nil>}
	I0308 00:12:37.500547   12824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 00:12:38.656099   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0308 00:12:38.656099   12824 machine.go:97] duration metric: took 40.0590543s to provisionDockerMachine
	I0308 00:12:38.656099   12824 client.go:171] duration metric: took 1m43.7350072s to LocalClient.Create
	I0308 00:12:38.656099   12824 start.go:167] duration metric: took 1m43.7353162s to libmachine.API.Create "multinode-397400"
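
The unit install above is idempotent: the freshly rendered docker.service.new only replaces /lib/systemd/system/docker.service when diff reports a difference (here diff fails outright because no unit exists yet on the new VM), after which systemd is reloaded and docker enabled and restarted, producing the "Created symlink" line. A sketch running that same shell snippet through the stock ssh client, with the command string copied verbatim from the log; ssh_runner does this over its own SSH session:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	install := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
    		`{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
    		`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
    	key := `C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa`
    	out, err := exec.Command("ssh", "-i", key, "docker@172.20.48.212", install).CombinedOutput()
    	fmt.Println(string(out))
    	if err != nil {
    		panic(err)
    	}
    }
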
	I0308 00:12:38.656099   12824 start.go:293] postStartSetup for "multinode-397400" (driver="hyperv")
	I0308 00:12:38.656099   12824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 00:12:38.668295   12824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 00:12:38.668295   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:40.548986   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:12:40.549712   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:40.549712   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:12:42.805275   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:12:42.805845   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:42.805845   12824 sshutil.go:53] new ssh client: &{IP:172.20.48.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:12:42.916965   12824 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2486305s)
	I0308 00:12:42.928460   12824 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 00:12:42.934977   12824 command_runner.go:130] > NAME=Buildroot
	I0308 00:12:42.934977   12824 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0308 00:12:42.934977   12824 command_runner.go:130] > ID=buildroot
	I0308 00:12:42.934977   12824 command_runner.go:130] > VERSION_ID=2023.02.9
	I0308 00:12:42.934977   12824 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0308 00:12:42.935112   12824 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 00:12:42.935211   12824 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 00:12:42.935211   12824 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 00:12:42.936377   12824 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 00:12:42.936448   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0308 00:12:42.946775   12824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 00:12:42.962875   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 00:12:43.004779   12824 start.go:296] duration metric: took 4.3485668s for postStartSetup
	I0308 00:12:43.007233   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:44.867366   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:12:44.867979   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:44.868101   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:12:47.128362   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:12:47.129203   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:47.129340   12824 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:12:47.132070   12824 start.go:128] duration metric: took 1m52.2147055s to createHost
	I0308 00:12:47.132070   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:49.036121   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:12:49.036195   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:49.036195   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:12:51.339147   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:12:51.339421   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:51.346693   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:12:51.346868   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.48.212 22 <nil> <nil>}
	I0308 00:12:51.346868   12824 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 00:12:51.473965   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709856771.488849171
	
	I0308 00:12:51.473965   12824 fix.go:216] guest clock: 1709856771.488849171
	I0308 00:12:51.473965   12824 fix.go:229] Guest: 2024-03-08 00:12:51.488849171 +0000 UTC Remote: 2024-03-08 00:12:47.1320708 +0000 UTC m=+117.320304001 (delta=4.356778371s)
	I0308 00:12:51.473965   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:53.349696   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:12:53.349696   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:53.350109   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:12:55.596540   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:12:55.596632   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:55.601896   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:12:55.602060   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.48.212 22 <nil> <nil>}
	I0308 00:12:55.602060   12824 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709856771
	I0308 00:12:55.743538   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 00:12:51 UTC 2024
	
	I0308 00:12:55.743538   12824 fix.go:236] clock set: Fri Mar  8 00:12:51 UTC 2024
	 (err=<nil>)
	I0308 00:12:55.743664   12824 start.go:83] releasing machines lock for "multinode-397400", held for 2m0.8262172s
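
fix.go compares the guest clock (date +%s.%N over SSH) with the host-side reference time and, because the ~4.36s delta exceeds minikube's tolerance, resets the guest with sudo date -s @<epoch> before releasing the machines lock. A hedged local sketch of that skew check; the 1s tolerance is an assumption, not a value taken from the log:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockSkew parses the guest's `date +%s.%N` output and returns how far it
    // sits from the host reference time (positive means the guest is ahead).
    func clockSkew(guestOut string, ref time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(ref), nil
    }

    func main() {
    	// Values from the log: guest 1709856771.488849171, host reference 00:12:47.132 UTC.
    	ref := time.Date(2024, 3, 8, 0, 12, 47, 132070800, time.UTC)
    	skew, err := clockSkew("1709856771.488849171", ref)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("skew: %v\n", skew) // ~4.36s, matching the delta reported by fix.go
    	if math.Abs(skew.Seconds()) > 1 { // assumed tolerance for this sketch
    		// The log shows the reset issued as: sudo date -s @1709856771
    		fmt.Println("would reset guest clock: sudo date -s @<epoch>")
    	}
    }
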
	I0308 00:12:55.743890   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:57.686595   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:12:57.687134   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:57.687197   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:12:59.949848   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:12:59.950442   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:12:59.954140   12824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 00:12:59.954255   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:12:59.966904   12824 ssh_runner.go:195] Run: cat /version.json
	I0308 00:12:59.966904   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:13:02.012175   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:13:02.012175   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:13:02.012175   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:13:02.012397   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:13:02.012397   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:13:02.012513   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:13:04.485238   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:13:04.485289   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:13:04.485356   12824 sshutil.go:53] new ssh client: &{IP:172.20.48.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:13:04.507817   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:13:04.507817   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:13:04.508861   12824 sshutil.go:53] new ssh client: &{IP:172.20.48.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:13:04.582615   12824 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0308 00:13:04.582865   12824 ssh_runner.go:235] Completed: cat /version.json: (4.6159173s)
	I0308 00:13:04.593534   12824 ssh_runner.go:195] Run: systemctl --version
	I0308 00:13:04.690928   12824 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0308 00:13:04.691000   12824 command_runner.go:130] > systemd 252 (252)
	I0308 00:13:04.691074   12824 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0308 00:13:04.691074   12824 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7368893s)
	I0308 00:13:04.702387   12824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0308 00:13:04.710062   12824 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0308 00:13:04.710169   12824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 00:13:04.721441   12824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 00:13:04.745553   12824 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0308 00:13:04.745755   12824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
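
Before choosing a CNI, minikube disables any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix (the find/-exec mv above); on this VM that caught /etc/cni/net.d/87-podman-bridge.conflist. A hedged in-process equivalent of that rename pass, meant to run as root on the guest:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	dir := "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		panic(err)
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		// Same match the find expression uses: *bridge* or *podman*.
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				panic(err)
    			}
    			fmt.Println("disabled", src)
    		}
    	}
    }
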
	I0308 00:13:04.745883   12824 start.go:494] detecting cgroup driver to use...
	I0308 00:13:04.746064   12824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:13:04.780267   12824 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0308 00:13:04.791020   12824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 00:13:04.817911   12824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 00:13:04.834453   12824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 00:13:04.843947   12824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 00:13:04.873488   12824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:13:04.898989   12824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 00:13:04.924140   12824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:13:04.950150   12824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 00:13:04.977096   12824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 00:13:05.010018   12824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 00:13:05.025778   12824 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0308 00:13:05.037010   12824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 00:13:05.063455   12824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:13:05.227053   12824 ssh_runner.go:195] Run: sudo systemctl restart containerd
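
The sed passes above force containerd onto the cgroupfs driver (SystemdCgroup = false), pin sandbox_image to registry.k8s.io/pause:3.9, migrate v1 runtime entries to io.containerd.runc.v2 and point conf_dir at /etc/cni/net.d, before the daemon-reload and containerd restart. A hedged sketch of just the SystemdCgroup edit done in-process instead of with sed, for a local copy of config.toml:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml" // on the guest, as root
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		panic(err)
    	}
    }
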
	I0308 00:13:05.254639   12824 start.go:494] detecting cgroup driver to use...
	I0308 00:13:05.265258   12824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 00:13:05.282869   12824 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0308 00:13:05.282869   12824 command_runner.go:130] > [Unit]
	I0308 00:13:05.282869   12824 command_runner.go:130] > Description=Docker Application Container Engine
	I0308 00:13:05.282957   12824 command_runner.go:130] > Documentation=https://docs.docker.com
	I0308 00:13:05.282957   12824 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0308 00:13:05.282957   12824 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0308 00:13:05.282957   12824 command_runner.go:130] > StartLimitBurst=3
	I0308 00:13:05.282957   12824 command_runner.go:130] > StartLimitIntervalSec=60
	I0308 00:13:05.283012   12824 command_runner.go:130] > [Service]
	I0308 00:13:05.283012   12824 command_runner.go:130] > Type=notify
	I0308 00:13:05.283012   12824 command_runner.go:130] > Restart=on-failure
	I0308 00:13:05.283077   12824 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0308 00:13:05.283077   12824 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0308 00:13:05.283256   12824 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0308 00:13:05.283256   12824 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0308 00:13:05.283256   12824 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0308 00:13:05.283256   12824 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0308 00:13:05.283322   12824 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0308 00:13:05.283341   12824 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0308 00:13:05.283402   12824 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0308 00:13:05.283425   12824 command_runner.go:130] > ExecStart=
	I0308 00:13:05.283454   12824 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0308 00:13:05.283535   12824 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0308 00:13:05.283535   12824 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0308 00:13:05.283567   12824 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0308 00:13:05.283714   12824 command_runner.go:130] > LimitNOFILE=infinity
	I0308 00:13:05.283714   12824 command_runner.go:130] > LimitNPROC=infinity
	I0308 00:13:05.283714   12824 command_runner.go:130] > LimitCORE=infinity
	I0308 00:13:05.283746   12824 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0308 00:13:05.283746   12824 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0308 00:13:05.283746   12824 command_runner.go:130] > TasksMax=infinity
	I0308 00:13:05.283810   12824 command_runner.go:130] > TimeoutStartSec=0
	I0308 00:13:05.283810   12824 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0308 00:13:05.283835   12824 command_runner.go:130] > Delegate=yes
	I0308 00:13:05.283835   12824 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0308 00:13:05.283874   12824 command_runner.go:130] > KillMode=process
	I0308 00:13:05.283874   12824 command_runner.go:130] > [Install]
	I0308 00:13:05.283897   12824 command_runner.go:130] > WantedBy=multi-user.target
	I0308 00:13:05.294510   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:13:05.322723   12824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 00:13:05.363723   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:13:05.393973   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:13:05.429215   12824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0308 00:13:05.483965   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:13:05.503403   12824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:13:05.531418   12824 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0308 00:13:05.541452   12824 ssh_runner.go:195] Run: which cri-dockerd
	I0308 00:13:05.546426   12824 command_runner.go:130] > /usr/bin/cri-dockerd
	I0308 00:13:05.556622   12824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 00:13:05.571755   12824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 00:13:05.608409   12824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 00:13:05.777399   12824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 00:13:05.936900   12824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 00:13:05.937056   12824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0308 00:13:05.973520   12824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:13:06.151561   12824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 00:13:07.670086   12824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.51851s)
	I0308 00:13:07.680508   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 00:13:07.712251   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:13:07.741897   12824 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 00:13:07.915429   12824 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 00:13:08.080893   12824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:13:08.252930   12824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 00:13:08.285807   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:13:08.314859   12824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:13:08.490284   12824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 00:13:08.583935   12824 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 00:13:08.595408   12824 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 00:13:08.602858   12824 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0308 00:13:08.602858   12824 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0308 00:13:08.602858   12824 command_runner.go:130] > Device: 0,22	Inode: 883         Links: 1
	I0308 00:13:08.602858   12824 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0308 00:13:08.602858   12824 command_runner.go:130] > Access: 2024-03-08 00:13:08.534273421 +0000
	I0308 00:13:08.602858   12824 command_runner.go:130] > Modify: 2024-03-08 00:13:08.534273421 +0000
	I0308 00:13:08.602858   12824 command_runner.go:130] > Change: 2024-03-08 00:13:08.537273432 +0000
	I0308 00:13:08.602858   12824 command_runner.go:130] >  Birth: -
	I0308 00:13:08.602858   12824 start.go:562] Will wait 60s for crictl version
	I0308 00:13:08.614447   12824 ssh_runner.go:195] Run: which crictl
	I0308 00:13:08.618721   12824 command_runner.go:130] > /usr/bin/crictl
	I0308 00:13:08.632297   12824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 00:13:08.696324   12824 command_runner.go:130] > Version:  0.1.0
	I0308 00:13:08.696501   12824 command_runner.go:130] > RuntimeName:  docker
	I0308 00:13:08.696501   12824 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0308 00:13:08.696501   12824 command_runner.go:130] > RuntimeApiVersion:  v1
	I0308 00:13:08.696501   12824 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 00:13:08.704940   12824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:13:08.734276   12824 command_runner.go:130] > 24.0.7
	I0308 00:13:08.742580   12824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:13:08.770938   12824 command_runner.go:130] > 24.0.7
	I0308 00:13:08.775425   12824 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 00:13:08.775587   12824 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 00:13:08.779538   12824 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 00:13:08.779538   12824 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 00:13:08.779538   12824 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 00:13:08.779538   12824 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 00:13:08.781985   12824 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 00:13:08.781985   12824 ip.go:210] interface addr: 172.20.48.1/20
	I0308 00:13:08.790667   12824 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 00:13:08.796642   12824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
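
getIPForInterface walks the host's adapters until one matches the "vEthernet (Default Switch)" prefix, and that gateway address (172.20.48.1 here) is then pinned in the guest's /etc/hosts as host.minikube.internal via the grep -v / echo / cp pipeline above. A hedged sketch of the interface scan on the Windows host using the standard library:

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    func main() {
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		panic(err)
    	}
    	for _, ifc := range ifaces {
    		// Same prefix match ip.go logs above.
    		if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
    			continue
    		}
    		addrs, err := ifc.Addrs()
    		if err != nil {
    			panic(err)
    		}
    		for _, a := range addrs {
    			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
    				fmt.Println("host.minikube.internal ->", ipnet.IP) // 172.20.48.1 in this run
    			}
    		}
    	}
    }
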
	I0308 00:13:08.816984   12824 kubeadm.go:877] updating cluster {Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.48.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 00:13:08.817649   12824 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 00:13:08.825434   12824 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 00:13:08.847776   12824 docker.go:685] Got preloaded images: 
	I0308 00:13:08.847776   12824 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0308 00:13:08.858070   12824 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0308 00:13:08.872535   12824 command_runner.go:139] > {"Repositories":{}}
	I0308 00:13:08.882682   12824 ssh_runner.go:195] Run: which lz4
	I0308 00:13:08.886678   12824 command_runner.go:130] > /usr/bin/lz4
	I0308 00:13:08.886678   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0308 00:13:08.897964   12824 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 00:13:08.904820   12824 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 00:13:08.905153   12824 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 00:13:08.905793   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0308 00:13:11.275203   12824 docker.go:649] duration metric: took 2.3875276s to copy over tarball
	I0308 00:13:11.287714   12824 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 00:13:21.561966   12824 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.2741546s)
	I0308 00:13:21.561966   12824 ssh_runner.go:146] rm: /preloaded.tar.lz4
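
docker.go decides whether to use the preload by listing the guest's images with `docker images --format {{.Repository}}:{{.Tag}}`; the list was empty, so the 423 MB lz4 tarball of v1.28.4 images was copied over and unpacked directly into /var (the ~10s tar -I lz4 run above) before the temporary archive was removed. A small sketch of that "wasn't preloaded" check, assuming a local docker CLI for illustration:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsPreload reports whether the required image is missing from the
    // output of `docker images --format {{.Repository}}:{{.Tag}}`.
    func needsPreload(imagesOut, required string) bool {
    	sc := bufio.NewScanner(strings.NewReader(imagesOut))
    	for sc.Scan() {
    		if strings.TrimSpace(sc.Text()) == required {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	// In the run above this list is empty, so the lz4 preload is copied in,
    	// unpacked under /var, and docker is restarted to pick the images up.
    	fmt.Println("preload needed:", needsPreload(string(out), "registry.k8s.io/kube-apiserver:v1.28.4"))
    }
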
	I0308 00:13:21.623883   12824 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0308 00:13:21.639808   12824 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021
a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0308 00:13:21.639808   12824 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0308 00:13:21.683376   12824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:13:21.868455   12824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 00:13:24.108976   12824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.2404994s)
	I0308 00:13:24.117688   12824 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 00:13:24.149038   12824 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0308 00:13:24.149863   12824 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0308 00:13:24.149863   12824 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0308 00:13:24.149863   12824 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0308 00:13:24.149863   12824 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0308 00:13:24.149863   12824 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0308 00:13:24.149863   12824 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0308 00:13:24.149863   12824 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 00:13:24.150012   12824 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0308 00:13:24.150085   12824 cache_images.go:84] Images are preloaded, skipping loading
	I0308 00:13:24.150125   12824 kubeadm.go:928] updating node { 172.20.48.212 8443 v1.28.4 docker true true} ...
	I0308 00:13:24.150325   12824 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-397400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.48.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 00:13:24.159381   12824 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0308 00:13:24.194306   12824 command_runner.go:130] > cgroupfs
	I0308 00:13:24.195613   12824 cni.go:84] Creating CNI manager for ""
	I0308 00:13:24.195660   12824 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0308 00:13:24.195660   12824 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 00:13:24.195709   12824 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.48.212 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-397400 NodeName:multinode-397400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.48.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.48.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 00:13:24.196006   12824 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.48.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-397400"
	  kubeletExtraArgs:
	    node-ip: 172.20.48.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.48.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 00:13:24.206986   12824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 00:13:24.222603   12824 command_runner.go:130] > kubeadm
	I0308 00:13:24.222603   12824 command_runner.go:130] > kubectl
	I0308 00:13:24.222603   12824 command_runner.go:130] > kubelet
	I0308 00:13:24.223401   12824 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 00:13:24.233921   12824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 00:13:24.249086   12824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0308 00:13:24.276690   12824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 00:13:24.309441   12824 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0308 00:13:24.347639   12824 ssh_runner.go:195] Run: grep 172.20.48.212	control-plane.minikube.internal$ /etc/hosts
	I0308 00:13:24.352314   12824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.48.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 00:13:24.379800   12824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:13:24.563883   12824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:13:24.587850   12824 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400 for IP: 172.20.48.212
	I0308 00:13:24.587850   12824 certs.go:194] generating shared ca certs ...
	I0308 00:13:24.588874   12824 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:13:24.588874   12824 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 00:13:24.590064   12824 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 00:13:24.590275   12824 certs.go:256] generating profile certs ...
	I0308 00:13:24.590985   12824 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\client.key
	I0308 00:13:24.591144   12824 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\client.crt with IP's: []
	I0308 00:13:25.003225   12824 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\client.crt ...
	I0308 00:13:25.003225   12824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\client.crt: {Name:mkb6a7e8ba7fc970d9a8ae9c81b0b00f4342723e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:13:25.005225   12824 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\client.key ...
	I0308 00:13:25.005225   12824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\client.key: {Name:mk1a89dfbaa99dffc271753c3c8cf4708a8aa39f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:13:25.006608   12824 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key.f1bfdd01
	I0308 00:13:25.006608   12824 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt.f1bfdd01 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.48.212]
	I0308 00:13:25.283489   12824 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt.f1bfdd01 ...
	I0308 00:13:25.283489   12824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt.f1bfdd01: {Name:mk13228897d2294ddd03a731ad2092832c336202 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:13:25.284007   12824 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key.f1bfdd01 ...
	I0308 00:13:25.284007   12824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key.f1bfdd01: {Name:mk6d4323141f9803d86eb6b491b76622b6b199e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:13:25.285180   12824 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt.f1bfdd01 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt
	I0308 00:13:25.296806   12824 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key.f1bfdd01 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key
	I0308 00:13:25.297753   12824 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.key
	I0308 00:13:25.298757   12824 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.crt with IP's: []
	I0308 00:13:25.663602   12824 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.crt ...
	I0308 00:13:25.663602   12824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.crt: {Name:mk5fa23ffe5f399d881fdfcd40d686be7f9afb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:13:25.665060   12824 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.key ...
	I0308 00:13:25.665060   12824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.key: {Name:mk547e580cdf8c75ffb86179dfb26073b7a90fd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:13:25.665339   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 00:13:25.666233   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0308 00:13:25.666233   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 00:13:25.666233   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 00:13:25.666233   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0308 00:13:25.666872   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0308 00:13:25.667011   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0308 00:13:25.675894   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0308 00:13:25.676787   12824 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 00:13:25.676787   12824 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 00:13:25.676787   12824 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 00:13:25.676787   12824 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 00:13:25.677789   12824 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 00:13:25.677789   12824 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 00:13:25.678440   12824 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 00:13:25.678440   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0308 00:13:25.678440   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0308 00:13:25.678440   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:13:25.679346   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 00:13:25.723479   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 00:13:25.766243   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 00:13:25.806243   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 00:13:25.846846   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0308 00:13:25.889411   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 00:13:25.929243   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 00:13:25.968745   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 00:13:26.013751   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 00:13:26.058302   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 00:13:26.101661   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 00:13:26.148108   12824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 00:13:26.191055   12824 ssh_runner.go:195] Run: openssl version
	I0308 00:13:26.197674   12824 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0308 00:13:26.209591   12824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 00:13:26.238895   12824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 00:13:26.244808   12824 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:13:26.245252   12824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:13:26.255276   12824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 00:13:26.262114   12824 command_runner.go:130] > 51391683
	I0308 00:13:26.274092   12824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0308 00:13:26.302811   12824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 00:13:26.332689   12824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 00:13:26.339081   12824 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:13:26.339081   12824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:13:26.349274   12824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 00:13:26.356348   12824 command_runner.go:130] > 3ec20f2e
	I0308 00:13:26.368800   12824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 00:13:26.395944   12824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 00:13:26.421718   12824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:13:26.430152   12824 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:13:26.430229   12824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:13:26.441718   12824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:13:26.448745   12824 command_runner.go:130] > b5213941
	I0308 00:13:26.459041   12824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
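The three openssl/ln pairs above follow OpenSSL's hashed-directory convention for trust stores: each PEM under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout" and then exposed as /etc/ssl/certs/<hash>.0 so TLS clients that scan the hashed directory can find it. A minimal Go sketch of the same pattern, assuming openssl is on PATH and the process can write to /etc/ssl/certs (the file path and helper name are illustrative, not minikube's actual API):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA hashes a PEM certificate with openssl and exposes it under
// /etc/ssl/certs/<hash>.0, mirroring the commands in the log above.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}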
	I0308 00:13:26.486649   12824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 00:13:26.493009   12824 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 00:13:26.493784   12824 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 00:13:26.493837   12824 kubeadm.go:391] StartCluster: {Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.2
8.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.48.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 00:13:26.502053   12824 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 00:13:26.535398   12824 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 00:13:26.555530   12824 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0308 00:13:26.555530   12824 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0308 00:13:26.556498   12824 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0308 00:13:26.565874   12824 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 00:13:26.594060   12824 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 00:13:26.608866   12824 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0308 00:13:26.609319   12824 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0308 00:13:26.609319   12824 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0308 00:13:26.609405   12824 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 00:13:26.609656   12824 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 00:13:26.609707   12824 kubeadm.go:156] found existing configuration files:
	
	I0308 00:13:26.619941   12824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 00:13:26.634519   12824 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 00:13:26.634902   12824 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 00:13:26.645503   12824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 00:13:26.669553   12824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 00:13:26.686466   12824 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 00:13:26.686926   12824 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 00:13:26.697778   12824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 00:13:26.723455   12824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 00:13:26.738755   12824 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 00:13:26.739123   12824 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 00:13:26.750195   12824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 00:13:26.782448   12824 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 00:13:26.798784   12824 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 00:13:26.798784   12824 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 00:13:26.809231   12824 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
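The grep/rm pairs above are minikube's stale-config check before kubeadm init: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and anything missing or pointing elsewhere is removed so kubeadm can regenerate it. A rough Go equivalent of that loop, with the endpoint and file names hard-coded for illustration (a sketch, not the kubeadm.go helper itself):

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		// A missing file or a config that does not mention the expected
		// endpoint is treated as stale and removed so kubeadm rewrites it.
		if err != nil || !bytes.Contains(data, endpoint) {
			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
			continue
		}
		fmt.Printf("%s already targets %s, keeping it\n", path, endpoint)
	}
}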
	I0308 00:13:26.824949   12824 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 00:13:27.105466   12824 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0308 00:13:27.105532   12824 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 00:13:27.105765   12824 command_runner.go:130] > [preflight] Running pre-flight checks
	I0308 00:13:27.105901   12824 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 00:13:27.320984   12824 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 00:13:27.320984   12824 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 00:13:27.320984   12824 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 00:13:27.320984   12824 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 00:13:27.320984   12824 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 00:13:27.320984   12824 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 00:13:27.677999   12824 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 00:13:27.684487   12824 out.go:204]   - Generating certificates and keys ...
	I0308 00:13:27.677999   12824 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 00:13:27.684686   12824 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 00:13:27.684686   12824 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0308 00:13:27.684686   12824 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0308 00:13:27.684686   12824 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 00:13:27.782407   12824 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 00:13:27.782445   12824 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 00:13:27.913570   12824 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0308 00:13:27.913605   12824 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0308 00:13:28.179919   12824 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0308 00:13:28.180043   12824 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0308 00:13:28.253609   12824 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0308 00:13:28.253609   12824 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0308 00:13:28.443207   12824 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0308 00:13:28.443207   12824 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0308 00:13:28.443207   12824 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-397400] and IPs [172.20.48.212 127.0.0.1 ::1]
	I0308 00:13:28.443207   12824 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-397400] and IPs [172.20.48.212 127.0.0.1 ::1]
	I0308 00:13:28.627307   12824 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0308 00:13:28.627374   12824 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0308 00:13:28.627703   12824 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-397400] and IPs [172.20.48.212 127.0.0.1 ::1]
	I0308 00:13:28.627703   12824 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-397400] and IPs [172.20.48.212 127.0.0.1 ::1]
	I0308 00:13:28.837879   12824 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 00:13:28.837879   12824 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 00:13:29.026025   12824 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 00:13:29.026450   12824 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 00:13:29.290665   12824 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0308 00:13:29.290665   12824 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0308 00:13:29.291230   12824 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 00:13:29.291230   12824 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 00:13:29.584357   12824 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 00:13:29.584357   12824 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 00:13:29.819071   12824 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 00:13:29.819331   12824 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 00:13:29.909656   12824 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 00:13:29.910096   12824 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 00:13:30.043149   12824 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 00:13:30.043206   12824 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 00:13:30.044184   12824 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 00:13:30.044184   12824 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 00:13:30.051982   12824 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 00:13:30.051982   12824 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 00:13:30.056077   12824 out.go:204]   - Booting up control plane ...
	I0308 00:13:30.056342   12824 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 00:13:30.056342   12824 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 00:13:30.056342   12824 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 00:13:30.056342   12824 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 00:13:30.057011   12824 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 00:13:30.057011   12824 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 00:13:30.086984   12824 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 00:13:30.087803   12824 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 00:13:30.088106   12824 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 00:13:30.088106   12824 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 00:13:30.088234   12824 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0308 00:13:30.088270   12824 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 00:13:30.272698   12824 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 00:13:30.272698   12824 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 00:13:37.778295   12824 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.505063 seconds
	I0308 00:13:37.778295   12824 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.505063 seconds
	I0308 00:13:37.778574   12824 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 00:13:37.778643   12824 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 00:13:37.807483   12824 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 00:13:37.807483   12824 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 00:13:38.352367   12824 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 00:13:38.352423   12824 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0308 00:13:38.352647   12824 kubeadm.go:309] [mark-control-plane] Marking the node multinode-397400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 00:13:38.352647   12824 command_runner.go:130] > [mark-control-plane] Marking the node multinode-397400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 00:13:38.870076   12824 kubeadm.go:309] [bootstrap-token] Using token: gt5n6k.xcofs1sfh83i5dih
	I0308 00:13:38.870076   12824 command_runner.go:130] > [bootstrap-token] Using token: gt5n6k.xcofs1sfh83i5dih
	I0308 00:13:38.875056   12824 out.go:204]   - Configuring RBAC rules ...
	I0308 00:13:38.875056   12824 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 00:13:38.875056   12824 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 00:13:38.882901   12824 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 00:13:38.882996   12824 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 00:13:38.896340   12824 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 00:13:38.896340   12824 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 00:13:38.902866   12824 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 00:13:38.902960   12824 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 00:13:38.913444   12824 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 00:13:38.913444   12824 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 00:13:38.919191   12824 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 00:13:38.919191   12824 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 00:13:38.944474   12824 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 00:13:38.944474   12824 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 00:13:39.308919   12824 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 00:13:39.308919   12824 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0308 00:13:39.365867   12824 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 00:13:39.365923   12824 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0308 00:13:39.369044   12824 kubeadm.go:309] 
	I0308 00:13:39.369209   12824 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0308 00:13:39.369209   12824 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 00:13:39.369276   12824 kubeadm.go:309] 
	I0308 00:13:39.369513   12824 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0308 00:13:39.369513   12824 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 00:13:39.369578   12824 kubeadm.go:309] 
	I0308 00:13:39.369633   12824 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 00:13:39.369633   12824 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0308 00:13:39.369806   12824 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 00:13:39.369806   12824 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 00:13:39.369960   12824 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 00:13:39.369960   12824 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 00:13:39.369960   12824 kubeadm.go:309] 
	I0308 00:13:39.369960   12824 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 00:13:39.369960   12824 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0308 00:13:39.369960   12824 kubeadm.go:309] 
	I0308 00:13:39.369960   12824 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 00:13:39.369960   12824 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 00:13:39.369960   12824 kubeadm.go:309] 
	I0308 00:13:39.370507   12824 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 00:13:39.370507   12824 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0308 00:13:39.370750   12824 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 00:13:39.370750   12824 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 00:13:39.370930   12824 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 00:13:39.370930   12824 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 00:13:39.371001   12824 kubeadm.go:309] 
	I0308 00:13:39.371170   12824 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0308 00:13:39.371170   12824 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 00:13:39.371239   12824 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 00:13:39.371239   12824 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0308 00:13:39.371239   12824 kubeadm.go:309] 
	I0308 00:13:39.371239   12824 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token gt5n6k.xcofs1sfh83i5dih \
	I0308 00:13:39.371239   12824 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token gt5n6k.xcofs1sfh83i5dih \
	I0308 00:13:39.371787   12824 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 \
	I0308 00:13:39.371787   12824 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 \
	I0308 00:13:39.371884   12824 command_runner.go:130] > 	--control-plane 
	I0308 00:13:39.371884   12824 kubeadm.go:309] 	--control-plane 
	I0308 00:13:39.371884   12824 kubeadm.go:309] 
	I0308 00:13:39.371884   12824 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 00:13:39.371884   12824 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0308 00:13:39.371884   12824 kubeadm.go:309] 
	I0308 00:13:39.371884   12824 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token gt5n6k.xcofs1sfh83i5dih \
	I0308 00:13:39.371884   12824 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token gt5n6k.xcofs1sfh83i5dih \
	I0308 00:13:39.372537   12824 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
	I0308 00:13:39.372537   12824 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
	I0308 00:13:39.372775   12824 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 00:13:39.372775   12824 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
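The join commands printed above carry a --discovery-token-ca-cert-hash value; kubeadm derives it as the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, so a joining node can pin the CA it is told about. A short Go sketch that recomputes the same value from ca.crt (the certificate path is an assumption for illustration):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm pins the SHA-256 of the CA's Subject Public Key Info (DER form).
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:])
}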
	I0308 00:13:39.372838   12824 cni.go:84] Creating CNI manager for ""
	I0308 00:13:39.372838   12824 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0308 00:13:39.377169   12824 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0308 00:13:39.391831   12824 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0308 00:13:39.399810   12824 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0308 00:13:39.399810   12824 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0308 00:13:39.399810   12824 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0308 00:13:39.399810   12824 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0308 00:13:39.399810   12824 command_runner.go:130] > Access: 2024-03-08 00:11:53.371401300 +0000
	I0308 00:13:39.399810   12824 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0308 00:13:39.399810   12824 command_runner.go:130] > Change: 2024-03-08 00:11:45.780000000 +0000
	I0308 00:13:39.399810   12824 command_runner.go:130] >  Birth: -
	I0308 00:13:39.399810   12824 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0308 00:13:39.399810   12824 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0308 00:13:39.459022   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0308 00:13:40.914322   12824 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0308 00:13:40.914322   12824 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0308 00:13:40.914322   12824 command_runner.go:130] > serviceaccount/kindnet created
	I0308 00:13:40.914430   12824 command_runner.go:130] > daemonset.apps/kindnet created
	I0308 00:13:40.914453   12824 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.4554178s)
	I0308 00:13:40.914538   12824 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 00:13:40.929284   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-397400 minikube.k8s.io/updated_at=2024_03_08T00_13_40_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=multinode-397400 minikube.k8s.io/primary=true
	I0308 00:13:40.932293   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:40.959929   12824 command_runner.go:130] > -16
	I0308 00:13:40.960087   12824 ops.go:34] apiserver oom_adj: -16
	I0308 00:13:41.139891   12824 command_runner.go:130] > node/multinode-397400 labeled
	I0308 00:13:41.140253   12824 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0308 00:13:41.151344   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:41.264377   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:41.665995   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:41.768128   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:42.153939   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:42.252749   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:42.657896   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:42.758972   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:43.161452   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:43.259967   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:43.662337   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:43.795859   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:44.162825   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:44.275949   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:44.664863   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:44.783521   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:45.155663   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:45.262448   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:45.659535   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:45.771973   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:46.163139   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:46.271592   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:46.665021   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:46.785301   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:47.151959   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:47.253275   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:47.654331   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:47.777167   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:48.157123   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:48.262327   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:48.661641   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:48.785365   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:49.165786   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:49.274421   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:49.666568   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:49.787001   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:50.155508   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:50.271389   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:50.657765   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:50.792404   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:51.162237   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:51.334246   12824 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0308 00:13:51.651595   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 00:13:51.773078   12824 command_runner.go:130] > NAME      SECRETS   AGE
	I0308 00:13:51.773130   12824 command_runner.go:130] > default   0         0s
	I0308 00:13:51.773130   12824 kubeadm.go:1106] duration metric: took 10.8584488s to wait for elevateKubeSystemPrivileges
	W0308 00:13:51.773130   12824 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 00:13:51.773130   12824 kubeadm.go:393] duration metric: took 25.2790538s to StartCluster
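The run of 'serviceaccounts "default" not found' errors above is expected rather than a failure: after kubeadm init, minikube repeatedly runs "kubectl get sa default" (roughly every half second here) until the controller-manager has created the default service account, then records the wait (10.85s in this run) and moves on. A compact illustration of that polling loop, shelling out to kubectl much as ssh_runner does (binary path, kubeconfig path and timeout are assumptions):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl" // assumed path from the log
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists; cluster is ready for RBAC setup")
			return
		}
		// NotFound is normal until the controller-manager has created the account.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for the default service account")
	os.Exit(1)
}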
	I0308 00:13:51.773130   12824 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:13:51.773130   12824 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:13:51.774717   12824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:13:51.775712   12824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0308 00:13:51.775712   12824 start.go:234] Will wait 6m0s for node &{Name: IP:172.20.48.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 00:13:51.775712   12824 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 00:13:51.779719   12824 out.go:177] * Verifying Kubernetes components...
	I0308 00:13:51.775712   12824 addons.go:69] Setting storage-provisioner=true in profile "multinode-397400"
	I0308 00:13:51.775712   12824 addons.go:69] Setting default-storageclass=true in profile "multinode-397400"
	I0308 00:13:51.776748   12824 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:13:51.780729   12824 addons.go:234] Setting addon storage-provisioner=true in "multinode-397400"
	I0308 00:13:51.780729   12824 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-397400"
	I0308 00:13:51.780729   12824 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:13:51.783737   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:13:51.783737   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:13:51.795719   12824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:13:52.178407   12824 command_runner.go:130] > apiVersion: v1
	I0308 00:13:52.179401   12824 command_runner.go:130] > data:
	I0308 00:13:52.179401   12824 command_runner.go:130] >   Corefile: |
	I0308 00:13:52.179401   12824 command_runner.go:130] >     .:53 {
	I0308 00:13:52.179401   12824 command_runner.go:130] >         errors
	I0308 00:13:52.179401   12824 command_runner.go:130] >         health {
	I0308 00:13:52.179401   12824 command_runner.go:130] >            lameduck 5s
	I0308 00:13:52.179401   12824 command_runner.go:130] >         }
	I0308 00:13:52.179401   12824 command_runner.go:130] >         ready
	I0308 00:13:52.179401   12824 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0308 00:13:52.179401   12824 command_runner.go:130] >            pods insecure
	I0308 00:13:52.179401   12824 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0308 00:13:52.179401   12824 command_runner.go:130] >            ttl 30
	I0308 00:13:52.179401   12824 command_runner.go:130] >         }
	I0308 00:13:52.179401   12824 command_runner.go:130] >         prometheus :9153
	I0308 00:13:52.179401   12824 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0308 00:13:52.179401   12824 command_runner.go:130] >            max_concurrent 1000
	I0308 00:13:52.179401   12824 command_runner.go:130] >         }
	I0308 00:13:52.179401   12824 command_runner.go:130] >         cache 30
	I0308 00:13:52.179401   12824 command_runner.go:130] >         loop
	I0308 00:13:52.179401   12824 command_runner.go:130] >         reload
	I0308 00:13:52.179401   12824 command_runner.go:130] >         loadbalance
	I0308 00:13:52.179401   12824 command_runner.go:130] >     }
	I0308 00:13:52.179401   12824 command_runner.go:130] > kind: ConfigMap
	I0308 00:13:52.179401   12824 command_runner.go:130] > metadata:
	I0308 00:13:52.179401   12824 command_runner.go:130] >   creationTimestamp: "2024-03-08T00:13:39Z"
	I0308 00:13:52.179401   12824 command_runner.go:130] >   name: coredns
	I0308 00:13:52.179401   12824 command_runner.go:130] >   namespace: kube-system
	I0308 00:13:52.179401   12824 command_runner.go:130] >   resourceVersion: "266"
	I0308 00:13:52.179401   12824 command_runner.go:130] >   uid: 91156448-3a98-4e46-a520-43a2de776a54
	I0308 00:13:52.180406   12824 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0308 00:13:52.238133   12824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:13:52.748303   12824 command_runner.go:130] > configmap/coredns replaced
	I0308 00:13:52.748303   12824 start.go:948] {"host.minikube.internal": 172.20.48.1} host record injected into CoreDNS's ConfigMap
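The long sed pipeline a few lines up rewrites the coredns ConfigMap as plain text: it inserts a log directive ahead of the errors plugin and a hosts block mapping 172.20.48.1 to host.minikube.internal (with fallthrough) ahead of the forward plugin, which is what the "host record injected" message confirms. The hosts part of that edit sketched in Go, treating the Corefile purely as text (function name and sample Corefile are illustrative):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts {} stanza ahead of the forward plugin,
// the same edit the sed pipeline performs on the live ConfigMap.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	return strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hostsBlock+"        forward . /etc/resolv.conf", 1)
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n    }\n"
	fmt.Println(injectHostRecord(corefile, "172.20.48.1"))
}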
	I0308 00:13:52.750358   12824 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:13:52.750542   12824 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:13:52.751457   12824 kapi.go:59] client config for multinode-397400: &rest.Config{Host:"https://172.20.48.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 00:13:52.751457   12824 kapi.go:59] client config for multinode-397400: &rest.Config{Host:"https://172.20.48.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 00:13:52.752543   12824 cert_rotation.go:137] Starting client certificate rotation controller
	I0308 00:13:52.753151   12824 node_ready.go:35] waiting up to 6m0s for node "multinode-397400" to be "Ready" ...
	I0308 00:13:52.753151   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:52.753151   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:52.753151   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:52.753151   12824 round_trippers.go:463] GET https://172.20.48.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0308 00:13:52.753151   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:52.753151   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:52.753151   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:52.753151   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:52.770534   12824 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0308 00:13:52.770895   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:52.770995   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:52.770995   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:52.771096   12824 round_trippers.go:580]     Content-Length: 291
	I0308 00:13:52.771096   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:52 GMT
	I0308 00:13:52.771096   12824 round_trippers.go:580]     Audit-Id: cc05d692-ecf1-4bf9-9146-6aa92bb6b5e6
	I0308 00:13:52.771096   12824 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0308 00:13:52.771313   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:52.771313   12824 round_trippers.go:580]     Audit-Id: be614532-1bb7-44c9-b557-c937e85b9243
	I0308 00:13:52.771395   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:52.771395   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:52.771395   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:52.771395   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:52.771448   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:52 GMT
	I0308 00:13:52.771313   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:52.771830   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:52.771830   12824 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a6e2a295-f295-4ca3-974f-ef90f132b15c","resourceVersion":"385","creationTimestamp":"2024-03-08T00:13:39Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0308 00:13:52.771830   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:52.772729   12824 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a6e2a295-f295-4ca3-974f-ef90f132b15c","resourceVersion":"385","creationTimestamp":"2024-03-08T00:13:39Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0308 00:13:52.772729   12824 round_trippers.go:463] PUT https://172.20.48.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0308 00:13:52.772729   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:52.772729   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:52.772729   12824 round_trippers.go:473]     Content-Type: application/json
	I0308 00:13:52.772729   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:52.786513   12824 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0308 00:13:52.786513   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:52.787528   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:52 GMT
	I0308 00:13:52.787528   12824 round_trippers.go:580]     Audit-Id: 4bd7ba40-371b-4a9b-954f-4eab6d0fab45
	I0308 00:13:52.787566   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:52.787566   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:52.787566   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:52.787566   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:52.787566   12824 round_trippers.go:580]     Content-Length: 291
	I0308 00:13:52.787566   12824 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a6e2a295-f295-4ca3-974f-ef90f132b15c","resourceVersion":"387","creationTimestamp":"2024-03-08T00:13:39Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0308 00:13:53.265906   12824 round_trippers.go:463] GET https://172.20.48.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0308 00:13:53.266184   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:53.266184   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:53.266184   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:53.265906   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:53.266295   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:53.266295   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:53.266381   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:53.270668   12824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:13:53.270668   12824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:13:53.270668   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:53.270739   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:53.270739   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:53.270739   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:53.270816   12824 round_trippers.go:580]     Content-Length: 291
	I0308 00:13:53.270816   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:53 GMT
	I0308 00:13:53.270739   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:53 GMT
	I0308 00:13:53.270845   12824 round_trippers.go:580]     Audit-Id: f9d64060-c5e5-458b-9eaf-92e2a6be6cf0
	I0308 00:13:53.270845   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:53.270845   12824 round_trippers.go:580]     Audit-Id: 124ce203-5e82-421c-90f9-72566eec32ee
	I0308 00:13:53.270845   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:53.270845   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:53.270845   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:53.270845   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:53.270845   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:53.270845   12824 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a6e2a295-f295-4ca3-974f-ef90f132b15c","resourceVersion":"397","creationTimestamp":"2024-03-08T00:13:39Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0308 00:13:53.271099   12824 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-397400" context rescaled to 1 replicas
	I0308 00:13:53.271099   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:53.759772   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:53.759772   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:53.759772   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:53.759772   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:53.764058   12824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:13:53.764058   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:53.764058   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:53.764058   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:53 GMT
	I0308 00:13:53.764058   12824 round_trippers.go:580]     Audit-Id: d6b95214-f27e-4c2a-b215-016bab74b29e
	I0308 00:13:53.764058   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:53.764058   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:53.764058   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:53.764058   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:54.007729   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:13:54.008719   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:13:54.011725   12824 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 00:13:54.011725   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:13:54.017724   12824 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 00:13:54.017724   12824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 00:13:54.017724   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:13:54.017724   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:13:54.018762   12824 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:13:54.018762   12824 kapi.go:59] client config for multinode-397400: &rest.Config{Host:"https://172.20.48.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 00:13:54.019746   12824 addons.go:234] Setting addon default-storageclass=true in "multinode-397400"
	I0308 00:13:54.019746   12824 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:13:54.020741   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
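	[editor's note] The kapi.go:59 dump above shows the rest.Config built for the multinode-397400 profile: the API server address plus the profile's client certificate, key, and the cluster CA. Below is a sketch of an equivalent config; the host and paths are copied from the log, and the program only constructs the clientset.

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Host and certificate paths are copied from the kapi.go:59 dump above.
        cfg := &rest.Config{
            Host: "https://172.20.48.212:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: `C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\client.crt`,
                KeyFile:  `C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\client.key`,
                CAFile:   `C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt`,
            },
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("client ready:", client != nil)
    }
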
	I0308 00:13:54.268384   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:54.268474   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:54.268474   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:54.268474   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:54.272008   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:13:54.272008   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:54.272008   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:54.272008   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:54.272008   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:54.272008   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:54 GMT
	I0308 00:13:54.272008   12824 round_trippers.go:580]     Audit-Id: 30df3971-517e-4801-9f04-20feb693524b
	I0308 00:13:54.272008   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:54.272008   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:54.761057   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:54.761378   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:54.761378   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:54.761378   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:54.765700   12824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:13:54.766248   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:54.766248   12824 round_trippers.go:580]     Audit-Id: e8b6cb5c-eaa9-4e0d-a1e3-3a5a52702288
	I0308 00:13:54.766248   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:54.766248   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:54.766248   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:54.766248   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:54.766248   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:54 GMT
	I0308 00:13:54.766805   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:54.767194   12824 node_ready.go:53] node "multinode-397400" has status "Ready":"False"
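	[editor's note] node_ready.go polls GET /api/v1/nodes/multinode-397400 above and reports Ready:"False" because the node's Ready condition is not yet True. A minimal sketch of the same check with client-go follows; nodeIsReady is a hypothetical helper, and it assumes a clientset as in the first sketch plus the import corev1 "k8s.io/api/core/v1".

    // nodeIsReady reports whether the named node has a Ready condition set to True.
    // Hypothetical helper; assumes a clientset as in the first sketch plus
    // corev1 "k8s.io/api/core/v1".
    func nodeIsReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
        node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
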
	I0308 00:13:55.254121   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:55.254121   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:55.254121   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:55.254121   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:55.264327   12824 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0308 00:13:55.264327   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:55.265349   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:55 GMT
	I0308 00:13:55.265349   12824 round_trippers.go:580]     Audit-Id: 69ed78e9-84e3-4450-b02b-13dae154ebaf
	I0308 00:13:55.265403   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:55.265403   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:55.265459   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:55.265459   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:55.265584   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:55.765325   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:55.765325   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:55.765325   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:55.765325   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:55.769823   12824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:13:55.769823   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:55.769910   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:55 GMT
	I0308 00:13:55.769910   12824 round_trippers.go:580]     Audit-Id: e09dd4fd-fd45-4da0-b34a-a61563d6ac96
	I0308 00:13:55.769910   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:55.769910   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:55.769977   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:55.769977   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:55.770379   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:56.207674   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:13:56.207674   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:13:56.207674   12824 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 00:13:56.207674   12824 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 00:13:56.207674   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:13:56.258737   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:56.258737   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:56.259049   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:56.259049   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:56.262532   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:13:56.262933   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:56.262933   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:56.262933   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:56.262933   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:56.262933   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:56 GMT
	I0308 00:13:56.262933   12824 round_trippers.go:580]     Audit-Id: 4d660d58-e145-4e76-a00d-59b26e8a5510
	I0308 00:13:56.262933   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:56.263287   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:56.334989   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:13:56.335469   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:13:56.335469   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
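	[editor's note] The libmachine lines above shell out to PowerShell to read the VM's state and its first IP address from Hyper-V. Below is a minimal Go sketch of the same pattern using os/exec; the PowerShell expressions are taken verbatim from the log, and runPS is a hypothetical helper.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runPS runs a single PowerShell expression the way the libmachine log
    // lines above do, and returns its trimmed stdout.
    func runPS(expr string) (string, error) {
        cmd := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", expr)
        out, err := cmd.Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        state, err := runPS(`( Hyper-V\Get-VM multinode-397400 ).state`)
        if err != nil {
            panic(err)
        }
        ip, err := runPS(`(( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]`)
        if err != nil {
            panic(err)
        }
        fmt.Println(state, ip)
    }
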
	I0308 00:13:56.764994   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:56.764994   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:56.764994   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:56.764994   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:56.769996   12824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:13:56.770724   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:56.770724   12824 round_trippers.go:580]     Audit-Id: a3a2fce9-7bd7-4f0e-bda4-7f1b9978b9e7
	I0308 00:13:56.770724   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:56.770724   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:56.770724   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:56.770724   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:56.770724   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:56 GMT
	I0308 00:13:56.771279   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:56.771999   12824 node_ready.go:53] node "multinode-397400" has status "Ready":"False"
	I0308 00:13:57.258248   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:57.258248   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:57.258449   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:57.258449   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:57.264005   12824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:13:57.264005   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:57.264005   12824 round_trippers.go:580]     Audit-Id: e972197e-7005-4658-b274-315abad1c3f8
	I0308 00:13:57.264005   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:57.264005   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:57.264005   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:57.264005   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:57.264005   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:57 GMT
	I0308 00:13:57.264005   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:57.768586   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:57.768586   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:57.768586   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:57.768586   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:57.772582   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:13:57.772582   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:57.772950   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:57.772950   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:57.772950   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:57.772950   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:57.772950   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:57 GMT
	I0308 00:13:57.772950   12824 round_trippers.go:580]     Audit-Id: 7ea0815e-717b-42d0-a369-1a809810eb40
	I0308 00:13:57.773223   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:58.243642   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:13:58.244416   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:13:58.244416   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:13:58.260832   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:58.260832   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:58.260832   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:58.261037   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:58.265159   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:13:58.265159   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:58.265228   12824 round_trippers.go:580]     Audit-Id: c6db6cf5-fc60-489d-8e20-ca7994d0a202
	I0308 00:13:58.265228   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:58.265228   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:58.265228   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:58.265228   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:58.265228   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:58 GMT
	I0308 00:13:58.265464   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:58.768071   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:58.768071   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:58.768071   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:58.768071   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:58.771093   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:13:58.771093   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:58.771093   12824 round_trippers.go:580]     Audit-Id: c481a65b-3514-44fb-85cf-107610807d36
	I0308 00:13:58.771093   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:58.771093   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:58.771093   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:58.771093   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:58.771093   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:58 GMT
	I0308 00:13:58.772117   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:58.772117   12824 node_ready.go:53] node "multinode-397400" has status "Ready":"False"
	I0308 00:13:58.827063   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:13:58.827063   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:13:58.828364   12824 sshutil.go:53] new ssh client: &{IP:172.20.48.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:13:58.982582   12824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 00:13:59.260283   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:59.260283   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:59.260283   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:59.260283   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:59.263110   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:13:59.263110   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:59.263110   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:59.263110   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:59.263110   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:59 GMT
	I0308 00:13:59.263110   12824 round_trippers.go:580]     Audit-Id: 520fd18b-28e5-48ba-a4bf-de445fbb773f
	I0308 00:13:59.263110   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:59.263110   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:59.263110   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:59.753423   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:13:59.753423   12824 round_trippers.go:469] Request Headers:
	I0308 00:13:59.753423   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:13:59.753423   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:13:59.784485   12824 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0308 00:13:59.784485   12824 round_trippers.go:577] Response Headers:
	I0308 00:13:59.784485   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:13:59.785319   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:13:59 GMT
	I0308 00:13:59.785319   12824 round_trippers.go:580]     Audit-Id: 340d3b17-d4c1-4671-bf24-5c0953d2ac8e
	I0308 00:13:59.785319   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:13:59.785319   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:13:59.785319   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:13:59.785461   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:13:59.801614   12824 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0308 00:13:59.801614   12824 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0308 00:13:59.801614   12824 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0308 00:13:59.801614   12824 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0308 00:13:59.801614   12824 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0308 00:13:59.801737   12824 command_runner.go:130] > pod/storage-provisioner created
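	[editor's note] The storage-provisioner addon above is applied by copying the manifest to the node and running kubectl over the SSH connection opened at sshutil.go:53; command_runner then echoes the created resources. Below is a minimal sketch of the same remote apply with golang.org/x/crypto/ssh; the key path, address, user, and remote command are taken from the log, and host-key verification is skipped only to keep the sketch short.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and address are from the sshutil/ssh_runner lines above.
        key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "172.20.48.212:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify host keys in real code
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
        fmt.Println(string(out), err)
    }
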
	I0308 00:14:00.260176   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:00.260382   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:00.260382   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:00.260382   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:00.264852   12824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:14:00.265398   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:00.265398   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:00.265398   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:00.265398   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:00 GMT
	I0308 00:14:00.265464   12824 round_trippers.go:580]     Audit-Id: 169e6eb3-9f7e-4ca5-80f4-ba081fe57cba
	I0308 00:14:00.265464   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:00.265489   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:00.265586   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:14:00.701399   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:14:00.701399   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:00.701998   12824 sshutil.go:53] new ssh client: &{IP:172.20.48.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:14:00.761221   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:00.761221   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:00.761221   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:00.761221   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:00.764414   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:14:00.764414   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:00.764414   12824 round_trippers.go:580]     Audit-Id: a1ec3b54-23ea-468b-86f8-5a652ac19040
	I0308 00:14:00.764875   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:00.764875   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:00.764875   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:00.764875   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:00.764920   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:00 GMT
	I0308 00:14:00.765323   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:14:00.833412   12824 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 00:14:01.092689   12824 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0308 00:14:01.092881   12824 round_trippers.go:463] GET https://172.20.48.212:8443/apis/storage.k8s.io/v1/storageclasses
	I0308 00:14:01.092949   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:01.092949   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:01.092949   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:01.095299   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:14:01.095299   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:01.095299   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:01.095299   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:01.095299   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:01.095299   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:01.095299   12824 round_trippers.go:580]     Content-Length: 1273
	I0308 00:14:01.095299   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:01 GMT
	I0308 00:14:01.095909   12824 round_trippers.go:580]     Audit-Id: 16fa6cf8-cd95-4f2c-a770-35898ad508a2
	I0308 00:14:01.095949   12824 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"standard","uid":"9332bef2-7ca3-464f-952b-f830115e1906","resourceVersion":"419","creationTimestamp":"2024-03-08T00:14:01Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-08T00:14:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0308 00:14:01.096656   12824 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"9332bef2-7ca3-464f-952b-f830115e1906","resourceVersion":"419","creationTimestamp":"2024-03-08T00:14:01Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-08T00:14:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0308 00:14:01.096724   12824 round_trippers.go:463] PUT https://172.20.48.212:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0308 00:14:01.096816   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:01.096816   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:01.096816   12824 round_trippers.go:473]     Content-Type: application/json
	I0308 00:14:01.096889   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:01.107980   12824 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0308 00:14:01.107980   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:01.107980   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:01.107980   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:01.107980   12824 round_trippers.go:580]     Content-Length: 1220
	I0308 00:14:01.107980   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:01 GMT
	I0308 00:14:01.107980   12824 round_trippers.go:580]     Audit-Id: 2a0cbcf5-c7a3-4737-9993-9816df1c3109
	I0308 00:14:01.107980   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:01.109004   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:01.109004   12824 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"9332bef2-7ca3-464f-952b-f830115e1906","resourceVersion":"419","creationTimestamp":"2024-03-08T00:14:01Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-08T00:14:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0308 00:14:01.112332   12824 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0308 00:14:01.115838   12824 addons.go:505] duration metric: took 9.3400384s for enable addons: enabled=[storage-provisioner default-storageclass]
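	[editor's note] The PUT to /apis/storage.k8s.io/v1/storageclasses/standard above is what marks the addon's class as the cluster default: the storageclass.kubernetes.io/is-default-class annotation is written back as "true". Below is a client-go sketch of an equivalent update, as a fragment assuming the clientset and context from the first sketch.

    // Fragment assuming the clientset and ctx from the first sketch; it
    // re-applies the default-class annotation carried by the PUT above.
    sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    if sc.Annotations == nil {
        sc.Annotations = map[string]string{}
    }
    sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
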
	I0308 00:14:01.266405   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:01.266405   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:01.266405   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:01.266405   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:01.272338   12824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:14:01.272338   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:01.272847   12824 round_trippers.go:580]     Audit-Id: a0f67e11-eb15-427e-a69c-9efedb1d919a
	I0308 00:14:01.272847   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:01.272847   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:01.272847   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:01.272847   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:01.272847   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:01 GMT
	I0308 00:14:01.273083   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:14:01.273557   12824 node_ready.go:53] node "multinode-397400" has status "Ready":"False"
	I0308 00:14:01.768369   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:01.768669   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:01.768669   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:01.768669   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:01.773054   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:14:01.773125   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:01.773125   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:01.773125   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:01.773125   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:01 GMT
	I0308 00:14:01.773125   12824 round_trippers.go:580]     Audit-Id: ad0a3ee0-bb0e-4dd8-81b0-ca988c88807e
	I0308 00:14:01.773125   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:01.773125   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:01.773202   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:14:02.264984   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:02.264984   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:02.264984   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:02.264984   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:02.269047   12824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:14:02.269933   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:02.269933   12824 round_trippers.go:580]     Audit-Id: e3c6a93e-2c04-43e3-96d0-bc60ea1ca962
	I0308 00:14:02.269933   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:02.269933   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:02.269933   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:02.269933   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:02.269933   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:02 GMT
	I0308 00:14:02.270161   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:14:02.768102   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:02.768102   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:02.768102   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:02.768102   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:02.771457   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:14:02.772105   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:02.772105   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:02.772105   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:02 GMT
	I0308 00:14:02.772105   12824 round_trippers.go:580]     Audit-Id: 5e66b862-22bb-4ddb-ae10-e351885e3d97
	I0308 00:14:02.772105   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:02.772105   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:02.772105   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:02.772288   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:14:03.267159   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:03.267371   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:03.267371   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:03.267371   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:03.271730   12824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:14:03.271730   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:03.271820   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:03.271820   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:03.271820   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:03 GMT
	I0308 00:14:03.271820   12824 round_trippers.go:580]     Audit-Id: 23624eab-ba8d-4fdf-b73e-5d1516220267
	I0308 00:14:03.271820   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:03.271820   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:03.272809   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"365","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0308 00:14:03.754214   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:03.754270   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:03.754270   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:03.754270   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:03.759888   12824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:14:03.759888   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:03.759888   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:03.759888   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:03 GMT
	I0308 00:14:03.760146   12824 round_trippers.go:580]     Audit-Id: cb1c92e9-0c8f-40ef-815c-3ea66a99d4f7
	I0308 00:14:03.760146   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:03.760146   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:03.760146   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:03.764469   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"424","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0308 00:14:03.764572   12824 node_ready.go:49] node "multinode-397400" has status "Ready":"True"
	I0308 00:14:03.764572   12824 node_ready.go:38] duration metric: took 11.0113169s for node "multinode-397400" to be "Ready" ...
	I0308 00:14:03.764572   12824 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:14:03.764572   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods
	I0308 00:14:03.765132   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:03.765132   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:03.765132   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:03.773888   12824 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0308 00:14:03.773888   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:03.773888   12824 round_trippers.go:580]     Audit-Id: ec26c697-4565-4ae1-bc15-d50cfdb87955
	I0308 00:14:03.773888   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:03.773888   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:03.773888   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:03.773888   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:03.773888   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:03 GMT
	I0308 00:14:03.774912   12824 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"428","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52403 chars]
	I0308 00:14:03.778895   12824 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	I0308 00:14:03.779910   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:14:03.779910   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:03.779910   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:03.779910   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:03.783903   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:14:03.783903   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:03.783903   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:03.783903   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:03.784568   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:03.784568   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:03.784568   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:03 GMT
	I0308 00:14:03.784568   12824 round_trippers.go:580]     Audit-Id: 731fce14-2812-481f-95ed-26dc0112f68b
	I0308 00:14:03.784683   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"428","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0308 00:14:03.785358   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:03.785423   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:03.785423   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:03.785423   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:03.788052   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:14:03.788837   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:03.788837   12824 round_trippers.go:580]     Audit-Id: 4b442478-d210-462b-89e9-84abb92ef9c6
	I0308 00:14:03.788872   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:03.788872   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:03.788872   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:03.788872   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:03.788872   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:03 GMT
	I0308 00:14:03.788872   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"424","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0308 00:14:04.289836   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:14:04.289836   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:04.289836   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:04.289836   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:04.293450   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:14:04.293450   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:04.293450   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:04.293450   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:04 GMT
	I0308 00:14:04.293450   12824 round_trippers.go:580]     Audit-Id: cdbea0fa-6551-4fb5-b91c-c67f33fcb483
	I0308 00:14:04.294291   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:04.294291   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:04.294291   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:04.294584   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"428","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0308 00:14:04.295062   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:04.295062   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:04.295062   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:04.295062   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:04.297637   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:14:04.297637   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:04.297637   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:04.297637   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:04.297637   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:04.297637   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:04 GMT
	I0308 00:14:04.297637   12824 round_trippers.go:580]     Audit-Id: c236454e-7745-4ce9-b3f0-a8363b52831f
	I0308 00:14:04.297637   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:04.298649   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"424","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0308 00:14:04.784018   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:14:04.784107   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:04.784107   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:04.784107   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:04.787319   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:14:04.787946   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:04.787946   12824 round_trippers.go:580]     Audit-Id: ebff2402-54df-4016-b1cf-3b04e119697f
	I0308 00:14:04.787946   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:04.787946   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:04.788022   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:04.788022   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:04.788022   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:04 GMT
	I0308 00:14:04.788022   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"428","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0308 00:14:04.788756   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:04.788806   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:04.788806   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:04.788806   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:04.791359   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:14:04.791359   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:04.791359   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:04.791359   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:04 GMT
	I0308 00:14:04.791359   12824 round_trippers.go:580]     Audit-Id: 086f08da-9a3d-4d06-9b92-9f83115b18d1
	I0308 00:14:04.791359   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:04.791359   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:04.791641   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:04.792372   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"424","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0308 00:14:05.288256   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:14:05.288256   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:05.288256   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:05.288256   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:05.292879   12824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:14:05.292879   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:05.292879   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:05.292879   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:05.292879   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:05.293232   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:05 GMT
	I0308 00:14:05.293232   12824 round_trippers.go:580]     Audit-Id: d0b69f9d-1b9b-4722-8a56-fd4807159b8d
	I0308 00:14:05.293232   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:05.293999   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"440","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6512 chars]
	I0308 00:14:05.294738   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:05.294738   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:05.294738   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:05.294738   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:05.298886   12824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:14:05.298886   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:05.298886   12824 round_trippers.go:580]     Audit-Id: 3978c99c-5395-4c49-bdba-ef868ff0615e
	I0308 00:14:05.298886   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:05.298886   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:05.298886   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:05.298886   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:05.298886   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:05 GMT
	I0308 00:14:05.299808   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"424","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0308 00:14:05.787228   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:14:05.787314   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:05.787314   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:05.787314   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:05.791387   12824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:14:05.791387   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:05.791387   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:05.791387   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:05.791387   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:05.791387   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:05 GMT
	I0308 00:14:05.791387   12824 round_trippers.go:580]     Audit-Id: 77e87ac1-6a5d-42f1-bd9e-a6fc1c361718
	I0308 00:14:05.791387   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:05.792453   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"440","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6512 chars]
	I0308 00:14:05.793190   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:05.793806   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:05.793806   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:05.793806   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:05.799997   12824 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 00:14:05.799997   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:05.799997   12824 round_trippers.go:580]     Audit-Id: e5ded618-86be-4728-aa6b-9eb50fa98ad4
	I0308 00:14:05.799997   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:05.799997   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:05.799997   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:05.799997   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:05.799997   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:05 GMT
	I0308 00:14:05.799997   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"424","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0308 00:14:05.800712   12824 pod_ready.go:102] pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace has status "Ready":"False"
	I0308 00:14:06.290083   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:14:06.290083   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:06.290083   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:06.290083   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:06.293704   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:14:06.293704   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:06.293704   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:06.293704   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:06.294414   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:06.294414   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:06 GMT
	I0308 00:14:06.294414   12824 round_trippers.go:580]     Audit-Id: e85c011d-448c-4165-b216-eb7184cced6b
	I0308 00:14:06.294414   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:06.294593   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"444","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0308 00:14:06.295069   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:06.295069   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:06.295069   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:06.295069   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:06.297676   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:14:06.297676   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:06.297676   12824 round_trippers.go:580]     Audit-Id: d8c73b08-77c9-4e3f-845e-508623bcc1c6
	I0308 00:14:06.297676   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:06.297676   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:06.297676   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:06.297676   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:06.297676   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:06 GMT
	I0308 00:14:06.298936   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"424","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0308 00:14:06.298936   12824 pod_ready.go:92] pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace has status "Ready":"True"
	I0308 00:14:06.298936   12824 pod_ready.go:81] duration metric: took 2.5200179s for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	I0308 00:14:06.298936   12824 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:14:06.299481   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:14:06.299481   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:06.299481   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:06.299481   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:06.302898   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:14:06.302898   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:06.302898   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:06.302898   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:06 GMT
	I0308 00:14:06.302898   12824 round_trippers.go:580]     Audit-Id: 92d09f47-50e1-4725-8f68-b95c3e355845
	I0308 00:14:06.302898   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:06.302898   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:06.303502   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:06.303668   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"e576042a-07ca-47b1-b815-88318bfc734e","resourceVersion":"322","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.48.212:2379","kubernetes.io/config.hash":"fc65775229edb6b7e62a37e01d988ef3","kubernetes.io/config.mirror":"fc65775229edb6b7e62a37e01d988ef3","kubernetes.io/config.seen":"2024-03-08T00:13:39.441051880Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0308 00:14:06.304588   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:06.304650   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:06.304705   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:06.304705   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:06.308551   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:14:06.308551   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:06.308551   12824 round_trippers.go:580]     Audit-Id: cd55793f-48bc-4bf2-b2a0-929bb6968c35
	I0308 00:14:06.308635   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:06.308635   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:06.308635   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:06.308659   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:06.308659   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:06 GMT
	I0308 00:14:06.308897   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"424","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0308 00:14:06.309617   12824 pod_ready.go:92] pod "etcd-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:14:06.309617   12824 pod_ready.go:81] duration metric: took 10.6804ms for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:14:06.309617   12824 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:14:06.309752   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-397400
	I0308 00:14:06.309752   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:06.309824   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:06.309824   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:06.313368   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:14:06.313368   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:06.313368   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:06.313368   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:06 GMT
	I0308 00:14:06.313368   12824 round_trippers.go:580]     Audit-Id: fbc39ad3-7b46-464a-b7f1-09d97f8f6427
	I0308 00:14:06.313368   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:06.313368   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:06.313368   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:06.313368   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-397400","namespace":"kube-system","uid":"084257fc-8f2b-4540-8b93-3d11bed62c3b","resourceVersion":"317","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.48.212:8443","kubernetes.io/config.hash":"e54af4aacb740938efeadd3de88c5b29","kubernetes.io/config.mirror":"e54af4aacb740938efeadd3de88c5b29","kubernetes.io/config.seen":"2024-03-08T00:13:39.441056480Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0308 00:14:06.314290   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:06.314290   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:06.314290   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:06.314290   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:06.317284   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:14:06.317284   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:06.317284   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:06.317284   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:06 GMT
	I0308 00:14:06.317284   12824 round_trippers.go:580]     Audit-Id: d5511130-3966-46b5-ae9c-17c27fa00a06
	I0308 00:14:06.317284   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:06.317284   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:06.317284   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:06.317284   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"424","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0308 00:14:06.317284   12824 pod_ready.go:92] pod "kube-apiserver-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:14:06.317284   12824 pod_ready.go:81] duration metric: took 7.5853ms for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:14:06.317284   12824 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:14:06.318290   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-397400
	I0308 00:14:06.318290   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:06.318290   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:06.318290   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:06.320598   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:14:06.320598   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:06.321625   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:06.321625   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:06.321625   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:06.321625   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:06.321625   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:06 GMT
	I0308 00:14:06.321625   12824 round_trippers.go:580]     Audit-Id: 5bead8aa-6aef-44a7-a0b5-5f64d699c4bc
	I0308 00:14:06.322010   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-397400","namespace":"kube-system","uid":"33cdb29c-e857-4fc2-b950-4fdde032852f","resourceVersion":"316","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.mirror":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.seen":"2024-03-08T00:13:39.441057580Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0308 00:14:06.322656   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:06.322656   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:06.322750   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:06.322750   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:06.324996   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:14:06.325308   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:06.325308   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:06.325308   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:06.325308   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:06.325308   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:06 GMT
	I0308 00:14:06.325308   12824 round_trippers.go:580]     Audit-Id: bd62fd7c-b517-4bb3-a64f-119af1385cbc
	I0308 00:14:06.325308   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:06.325942   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"424","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0308 00:14:06.326532   12824 pod_ready.go:92] pod "kube-controller-manager-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:14:06.326602   12824 pod_ready.go:81] duration metric: took 9.3177ms for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:14:06.326602   12824 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:14:06.326767   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:14:06.326767   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:06.326767   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:06.326856   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:06.329083   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:14:06.330076   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:06.330076   12824 round_trippers.go:580]     Audit-Id: 324ef729-dd1f-4298-ba33-d3d6dc0c4aa2
	I0308 00:14:06.330076   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:06.330076   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:06.330076   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:06.330076   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:06.330076   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:06 GMT
	I0308 00:14:06.330076   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nt8td","generateName":"kube-proxy-","namespace":"kube-system","uid":"dafb9385-fe20-4849-bd58-31dcf82b4a58","resourceVersion":"403","creationTimestamp":"2024-03-08T00:13:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0308 00:14:06.330076   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:06.330076   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:06.330076   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:06.330076   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:06.333908   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:14:06.333908   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:06.333908   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:06.333908   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:06 GMT
	I0308 00:14:06.333908   12824 round_trippers.go:580]     Audit-Id: 1d7cbfe1-9edb-4e05-b8ab-f956cc55349a
	I0308 00:14:06.333908   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:06.333908   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:06.333908   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:06.334340   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"424","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0308 00:14:06.334340   12824 pod_ready.go:92] pod "kube-proxy-nt8td" in "kube-system" namespace has status "Ready":"True"
	I0308 00:14:06.334340   12824 pod_ready.go:81] duration metric: took 7.7383ms for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:14:06.334340   12824 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:14:06.492356   12824 request.go:629] Waited for 157.339ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:14:06.492586   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:14:06.492660   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:06.492660   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:06.492660   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:06.495969   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:14:06.496036   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:06.496036   12824 round_trippers.go:580]     Audit-Id: 17dbfc65-82bc-4868-8031-ac5f40e99618
	I0308 00:14:06.496036   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:06.496036   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:06.496036   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:06.496036   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:06.496036   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:06 GMT
	I0308 00:14:06.496107   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"313","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0308 00:14:06.694729   12824 request.go:629] Waited for 198.6198ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:06.695264   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:14:06.695294   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:06.695294   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:06.695294   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:06.698533   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:14:06.698533   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:06.698533   12824 round_trippers.go:580]     Audit-Id: 2d1ec0ca-3a49-4dae-8747-22eac4f96fe2
	I0308 00:14:06.698533   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:06.698533   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:06.698533   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:06.699232   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:06.699232   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:06 GMT
	I0308 00:14:06.699427   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"424","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0308 00:14:06.699837   12824 pod_ready.go:92] pod "kube-scheduler-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:14:06.699837   12824 pod_ready.go:81] duration metric: took 364.8187ms for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:14:06.699837   12824 pod_ready.go:38] duration metric: took 2.9352383s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:14:06.699965   12824 api_server.go:52] waiting for apiserver process to appear ...
	I0308 00:14:06.710239   12824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:14:06.734152   12824 command_runner.go:130] > 2157
	I0308 00:14:06.735054   12824 api_server.go:72] duration metric: took 14.9592016s to wait for apiserver process to appear ...
	I0308 00:14:06.735054   12824 api_server.go:88] waiting for apiserver healthz status ...
	I0308 00:14:06.735054   12824 api_server.go:253] Checking apiserver healthz at https://172.20.48.212:8443/healthz ...
	I0308 00:14:06.741703   12824 api_server.go:279] https://172.20.48.212:8443/healthz returned 200:
	ok
	I0308 00:14:06.742291   12824 round_trippers.go:463] GET https://172.20.48.212:8443/version
	I0308 00:14:06.742319   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:06.742319   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:06.742319   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:06.743966   12824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:14:06.743966   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:06.743966   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:06.743966   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:06.743966   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:06.743966   12824 round_trippers.go:580]     Content-Length: 264
	I0308 00:14:06.744386   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:06 GMT
	I0308 00:14:06.744386   12824 round_trippers.go:580]     Audit-Id: 695c8911-c2f9-45f4-b2e0-7819f5d30227
	I0308 00:14:06.744429   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:06.744465   12824 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0308 00:14:06.744528   12824 api_server.go:141] control plane version: v1.28.4
	I0308 00:14:06.744599   12824 api_server.go:131] duration metric: took 9.5448ms to wait for apiserver health ...
	I0308 00:14:06.744687   12824 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 00:14:06.897923   12824 request.go:629] Waited for 153.2349ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods
	I0308 00:14:06.898084   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods
	I0308 00:14:06.898306   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:06.898306   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:06.898306   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:06.902571   12824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:14:06.902571   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:06.902571   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:06.902571   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:06.902571   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:06 GMT
	I0308 00:14:06.902571   12824 round_trippers.go:580]     Audit-Id: fbc160c1-10ca-4534-966c-f27cb132b5cf
	I0308 00:14:06.902571   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:06.902571   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:06.905225   12824 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"444","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I0308 00:14:06.907856   12824 system_pods.go:59] 8 kube-system pods found
	I0308 00:14:06.907908   12824 system_pods.go:61] "coredns-5dd5756b68-w4hzh" [d164fdff-2fa7-412c-86e6-f0fa957e0361] Running
	I0308 00:14:06.907908   12824 system_pods.go:61] "etcd-multinode-397400" [e576042a-07ca-47b1-b815-88318bfc734e] Running
	I0308 00:14:06.907908   12824 system_pods.go:61] "kindnet-wkwtm" [0f4e9963-262a-4dd2-b907-da97715a6378] Running
	I0308 00:14:06.907908   12824 system_pods.go:61] "kube-apiserver-multinode-397400" [084257fc-8f2b-4540-8b93-3d11bed62c3b] Running
	I0308 00:14:06.907908   12824 system_pods.go:61] "kube-controller-manager-multinode-397400" [33cdb29c-e857-4fc2-b950-4fdde032852f] Running
	I0308 00:14:06.908000   12824 system_pods.go:61] "kube-proxy-nt8td" [dafb9385-fe20-4849-bd58-31dcf82b4a58] Running
	I0308 00:14:06.908000   12824 system_pods.go:61] "kube-scheduler-multinode-397400" [3f029955-80be-4e3d-a157-faec2631b9b8] Running
	I0308 00:14:06.908000   12824 system_pods.go:61] "storage-provisioner" [81b55677-743c-4d2f-b04f-95928d4a3868] Running
	I0308 00:14:06.908000   12824 system_pods.go:74] duration metric: took 163.3123ms to wait for pod list to return data ...
	I0308 00:14:06.908077   12824 default_sa.go:34] waiting for default service account to be created ...
	I0308 00:14:07.100647   12824 request.go:629] Waited for 192.4629ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.48.212:8443/api/v1/namespaces/default/serviceaccounts
	I0308 00:14:07.100830   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/default/serviceaccounts
	I0308 00:14:07.100992   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:07.101043   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:07.101043   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:07.103434   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:14:07.103434   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:07.103434   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:07.103434   12824 round_trippers.go:580]     Content-Length: 261
	I0308 00:14:07.103434   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:07 GMT
	I0308 00:14:07.103434   12824 round_trippers.go:580]     Audit-Id: 25151adf-805a-4694-8317-7eb22db4b3c8
	I0308 00:14:07.103434   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:07.103434   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:07.103434   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:07.103931   12824 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"095cdd29-7997-44a2-8aa0-51adc17297b9","resourceVersion":"333","creationTimestamp":"2024-03-08T00:13:51Z"}}]}
	I0308 00:14:07.104396   12824 default_sa.go:45] found service account: "default"
	I0308 00:14:07.104396   12824 default_sa.go:55] duration metric: took 196.3181ms for default service account to be created ...
	I0308 00:14:07.104396   12824 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 00:14:07.303015   12824 request.go:629] Waited for 198.2995ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods
	I0308 00:14:07.303277   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods
	I0308 00:14:07.303277   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:07.303277   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:07.303277   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:07.307905   12824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:14:07.308287   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:07.308287   12824 round_trippers.go:580]     Audit-Id: 81b8f942-04f3-4dfc-9469-c86f0b53ee0d
	I0308 00:14:07.308287   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:07.308287   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:07.308287   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:07.308287   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:07.308287   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:07 GMT
	I0308 00:14:07.309021   12824 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"444","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I0308 00:14:07.311767   12824 system_pods.go:86] 8 kube-system pods found
	I0308 00:14:07.311767   12824 system_pods.go:89] "coredns-5dd5756b68-w4hzh" [d164fdff-2fa7-412c-86e6-f0fa957e0361] Running
	I0308 00:14:07.311767   12824 system_pods.go:89] "etcd-multinode-397400" [e576042a-07ca-47b1-b815-88318bfc734e] Running
	I0308 00:14:07.311832   12824 system_pods.go:89] "kindnet-wkwtm" [0f4e9963-262a-4dd2-b907-da97715a6378] Running
	I0308 00:14:07.311832   12824 system_pods.go:89] "kube-apiserver-multinode-397400" [084257fc-8f2b-4540-8b93-3d11bed62c3b] Running
	I0308 00:14:07.311832   12824 system_pods.go:89] "kube-controller-manager-multinode-397400" [33cdb29c-e857-4fc2-b950-4fdde032852f] Running
	I0308 00:14:07.311832   12824 system_pods.go:89] "kube-proxy-nt8td" [dafb9385-fe20-4849-bd58-31dcf82b4a58] Running
	I0308 00:14:07.311832   12824 system_pods.go:89] "kube-scheduler-multinode-397400" [3f029955-80be-4e3d-a157-faec2631b9b8] Running
	I0308 00:14:07.311832   12824 system_pods.go:89] "storage-provisioner" [81b55677-743c-4d2f-b04f-95928d4a3868] Running
	I0308 00:14:07.311832   12824 system_pods.go:126] duration metric: took 207.4331ms to wait for k8s-apps to be running ...
	I0308 00:14:07.311901   12824 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 00:14:07.321435   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 00:14:07.347221   12824 system_svc.go:56] duration metric: took 35.3896ms WaitForService to wait for kubelet
	I0308 00:14:07.348163   12824 kubeadm.go:576] duration metric: took 15.5723051s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 00:14:07.348163   12824 node_conditions.go:102] verifying NodePressure condition ...
	I0308 00:14:07.504764   12824 request.go:629] Waited for 156.5992ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.48.212:8443/api/v1/nodes
	I0308 00:14:07.504764   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes
	I0308 00:14:07.504764   12824 round_trippers.go:469] Request Headers:
	I0308 00:14:07.504764   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:14:07.504764   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:14:07.510111   12824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:14:07.510111   12824 round_trippers.go:577] Response Headers:
	I0308 00:14:07.510111   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:14:07.510111   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:14:07.510111   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:14:07 GMT
	I0308 00:14:07.510111   12824 round_trippers.go:580]     Audit-Id: 1fdac4a4-a79a-4e4e-8979-9dc934a05f3e
	I0308 00:14:07.510111   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:14:07.510111   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:14:07.510808   12824 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"424","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4835 chars]
	I0308 00:14:07.511568   12824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:14:07.512093   12824 node_conditions.go:123] node cpu capacity is 2
	I0308 00:14:07.512093   12824 node_conditions.go:105] duration metric: took 163.9279ms to run NodePressure ...
	I0308 00:14:07.512093   12824 start.go:240] waiting for startup goroutines ...
	I0308 00:14:07.512093   12824 start.go:245] waiting for cluster config update ...
	I0308 00:14:07.512213   12824 start.go:254] writing updated cluster config ...
	I0308 00:14:07.518129   12824 out.go:177] 
	I0308 00:14:07.521377   12824 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:14:07.524328   12824 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:14:07.525450   12824 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:14:07.531121   12824 out.go:177] * Starting "multinode-397400-m02" worker node in "multinode-397400" cluster
	I0308 00:14:07.533093   12824 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 00:14:07.533093   12824 cache.go:56] Caching tarball of preloaded images
	I0308 00:14:07.533093   12824 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0308 00:14:07.533093   12824 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0308 00:14:07.534443   12824 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:14:07.537166   12824 start.go:360] acquireMachinesLock for multinode-397400-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 00:14:07.538132   12824 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-397400-m02"
	I0308 00:14:07.538237   12824 start.go:93] Provisioning new machine with config: &{Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.48.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0308 00:14:07.538237   12824 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0308 00:14:07.542619   12824 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0308 00:14:07.542619   12824 start.go:159] libmachine.API.Create for "multinode-397400" (driver="hyperv")
	I0308 00:14:07.542619   12824 client.go:168] LocalClient.Create starting
	I0308 00:14:07.542619   12824 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0308 00:14:07.543615   12824 main.go:141] libmachine: Decoding PEM data...
	I0308 00:14:07.543615   12824 main.go:141] libmachine: Parsing certificate...
	I0308 00:14:07.543615   12824 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0308 00:14:07.543615   12824 main.go:141] libmachine: Decoding PEM data...
	I0308 00:14:07.543615   12824 main.go:141] libmachine: Parsing certificate...
	I0308 00:14:07.543615   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0308 00:14:09.261072   12824 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0308 00:14:09.261072   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:09.261072   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0308 00:14:10.862307   12824 main.go:141] libmachine: [stdout =====>] : False
	
	I0308 00:14:10.862514   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:10.862610   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0308 00:14:12.256913   12824 main.go:141] libmachine: [stdout =====>] : True
	
	I0308 00:14:12.257134   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:12.257134   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0308 00:14:15.509014   12824 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0308 00:14:15.509014   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:15.512058   12824 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 00:14:16.007841   12824 main.go:141] libmachine: Creating SSH key...
	I0308 00:14:16.208569   12824 main.go:141] libmachine: Creating VM...
	I0308 00:14:16.208569   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0308 00:14:18.892903   12824 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0308 00:14:18.893182   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:18.893182   12824 main.go:141] libmachine: Using switch "Default Switch"
	I0308 00:14:18.893348   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0308 00:14:20.532298   12824 main.go:141] libmachine: [stdout =====>] : True
	
	I0308 00:14:20.532298   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:20.532685   12824 main.go:141] libmachine: Creating VHD
	I0308 00:14:20.532685   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0308 00:14:24.026719   12824 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 227EAAAA-DF31-4B1F-A02B-8F05A7B16323
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0308 00:14:24.027198   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:24.027198   12824 main.go:141] libmachine: Writing magic tar header
	I0308 00:14:24.027198   12824 main.go:141] libmachine: Writing SSH key tar header
	I0308 00:14:24.035037   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0308 00:14:27.048074   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:14:27.048849   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:27.048917   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\disk.vhd' -SizeBytes 20000MB
	I0308 00:14:29.376750   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:14:29.376985   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:29.376985   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-397400-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0308 00:14:32.714642   12824 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-397400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0308 00:14:32.715230   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:32.715230   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-397400-m02 -DynamicMemoryEnabled $false
	I0308 00:14:34.766108   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:14:34.766896   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:34.766896   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-397400-m02 -Count 2
	I0308 00:14:36.778154   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:14:36.778154   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:36.778154   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-397400-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\boot2docker.iso'
	I0308 00:14:39.149815   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:14:39.150197   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:39.150197   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-397400-m02 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\disk.vhd'
	I0308 00:14:41.604102   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:14:41.604216   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:41.604216   12824 main.go:141] libmachine: Starting VM...
	I0308 00:14:41.604216   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-397400-m02
	I0308 00:14:44.477874   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:14:44.477874   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:44.477874   12824 main.go:141] libmachine: Waiting for host to start...
	I0308 00:14:44.477874   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:14:46.587465   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:14:46.588043   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:46.588043   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:14:48.884033   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:14:48.884725   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:49.885391   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:14:51.991663   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:14:51.992514   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:51.992539   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:14:54.358258   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:14:54.358258   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:55.370722   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:14:57.385106   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:14:57.385380   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:14:57.385380   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:14:59.731255   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:14:59.731255   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:00.746384   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:15:02.816278   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:15:02.816536   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:02.816536   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:15:05.140146   12824 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:15:05.140146   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:06.142140   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:15:08.218154   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:15:08.218365   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:08.218365   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:15:10.588143   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:15:10.588734   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:10.588734   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:15:12.554269   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:15:12.554269   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:12.554269   12824 machine.go:94] provisionDockerMachine start ...
	I0308 00:15:12.555383   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:15:14.582395   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:15:14.583239   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:14.583239   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:15:16.922601   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:15:16.922601   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:16.928575   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:15:16.937869   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.226 22 <nil> <nil>}
	I0308 00:15:16.937869   12824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 00:15:17.053511   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 00:15:17.053584   12824 buildroot.go:166] provisioning hostname "multinode-397400-m02"
	I0308 00:15:17.053670   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:15:19.028247   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:15:19.028247   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:19.029305   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:15:21.388038   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:15:21.388038   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:21.392836   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:15:21.393290   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.226 22 <nil> <nil>}
	I0308 00:15:21.393290   12824 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-397400-m02 && echo "multinode-397400-m02" | sudo tee /etc/hostname
	I0308 00:15:21.544641   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-397400-m02
	
	I0308 00:15:21.544773   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:15:23.554595   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:15:23.554677   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:23.554715   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:15:25.921469   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:15:25.921469   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:25.927652   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:15:25.927652   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.226 22 <nil> <nil>}
	I0308 00:15:25.928264   12824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-397400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-397400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-397400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 00:15:26.065373   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 00:15:26.065407   12824 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 00:15:26.065407   12824 buildroot.go:174] setting up certificates
	I0308 00:15:26.065407   12824 provision.go:84] configureAuth start
	I0308 00:15:26.065407   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:15:28.009250   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:15:28.009453   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:28.009537   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:15:30.398952   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:15:30.398952   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:30.398952   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:15:32.410308   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:15:32.410308   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:32.410308   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:15:34.747937   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:15:34.747937   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:34.747937   12824 provision.go:143] copyHostCerts
	I0308 00:15:34.747937   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0308 00:15:34.748459   12824 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 00:15:34.748618   12824 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 00:15:34.748671   12824 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 00:15:34.750090   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0308 00:15:34.750710   12824 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 00:15:34.750710   12824 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 00:15:34.750710   12824 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 00:15:34.752500   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0308 00:15:34.753037   12824 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 00:15:34.753037   12824 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 00:15:34.753347   12824 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 00:15:34.754698   12824 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-397400-m02 san=[127.0.0.1 172.20.61.226 localhost minikube multinode-397400-m02]
	I0308 00:15:34.985369   12824 provision.go:177] copyRemoteCerts
	I0308 00:15:34.995677   12824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 00:15:34.999993   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:15:37.002181   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:15:37.002643   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:37.002643   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:15:39.406635   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:15:39.406971   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:39.407034   12824 sshutil.go:53] new ssh client: &{IP:172.20.61.226 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:15:39.508094   12824 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5080235s)
	I0308 00:15:39.508094   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0308 00:15:39.508094   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 00:15:39.552273   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0308 00:15:39.552673   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0308 00:15:39.596099   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0308 00:15:39.596489   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 00:15:39.638720   12824 provision.go:87] duration metric: took 13.5731857s to configureAuth
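The configureAuth step above regenerates a server certificate for the new node and then copies ca.pem, server.pem and server-key.pem into /etc/docker over SSH. Below is a minimal sketch of that copy step using the system ssh/scp binaries rather than minikube's internal ssh_runner; the host, key path and destination names are taken from the log above, and the local paths are abbreviated placeholders.

package main

import (
	"fmt"
	"os/exec"
)

// copyCert pushes one local PEM file into /etc/docker on the remote node.
// The host, SSH key and destination names come from the log above; the
// local paths used in main are abbreviated placeholders.
func copyCert(host, keyPath, local, remote string) error {
	// scp to a world-writable temp path first, because the docker user
	// cannot write /etc/docker directly.
	tmp := "/tmp/" + remote
	scp := exec.Command("scp", "-i", keyPath, "-o", "StrictHostKeyChecking=no",
		local, fmt.Sprintf("docker@%s:%s", host, tmp))
	if out, err := scp.CombinedOutput(); err != nil {
		return fmt.Errorf("scp %s: %v: %s", local, err, out)
	}
	// Then move it into place with sudo over ssh.
	ssh := exec.Command("ssh", "-i", keyPath, "-o", "StrictHostKeyChecking=no",
		"docker@"+host,
		"sudo mkdir -p /etc/docker && sudo mv "+tmp+" /etc/docker/"+remote)
	if out, err := ssh.CombinedOutput(); err != nil {
		return fmt.Errorf("install %s: %v: %s", remote, err, out)
	}
	return nil
}

func main() {
	host := "172.20.61.226"
	key := `C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa`
	files := map[string]string{
		`certs\ca.pem`:            "ca.pem",
		`machines\server.pem`:     "server.pem",
		`machines\server-key.pem`: "server-key.pem",
	}
	for local, remote := range files {
		if err := copyCert(host, key, local, remote); err != nil {
			fmt.Println(err)
		}
	}
}
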
	I0308 00:15:39.638777   12824 buildroot.go:189] setting minikube options for container-runtime
	I0308 00:15:39.639090   12824 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:15:39.639090   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:15:41.617004   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:15:41.617646   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:41.617646   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:15:43.862919   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:15:43.862919   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:43.868276   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:15:43.868905   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.226 22 <nil> <nil>}
	I0308 00:15:43.868905   12824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 00:15:43.988356   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 00:15:43.988444   12824 buildroot.go:70] root file system type: tmpfs
	I0308 00:15:43.988677   12824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 00:15:43.988773   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:15:45.810726   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:15:45.810726   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:45.810815   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:15:48.006019   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:15:48.016343   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:48.021821   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:15:48.022404   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.226 22 <nil> <nil>}
	I0308 00:15:48.022606   12824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.48.212"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 00:15:48.162190   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.48.212
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 00:15:48.162251   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:15:50.035338   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:15:50.035403   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:50.035460   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:15:52.273971   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:15:52.273971   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:52.289477   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:15:52.290005   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.226 22 <nil> <nil>}
	I0308 00:15:52.290005   12824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 00:15:53.355999   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0308 00:15:53.355999   12824 machine.go:97] duration metric: took 40.8003328s to provisionDockerMachine
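provisionDockerMachine finishes by rendering a docker.service unit with the NO_PROXY environment, TLS paths and insecure registry seen above, writing it to docker.service.new, and swapping it in (daemon-reload, enable, restart) only when it differs from the unit already on disk. A rough sketch of rendering such a unit from a Go text/template follows; the struct and field names are illustrative, not minikube's actual types.

package main

import (
	"os"
	"text/template"
)

// dockerUnit carries the values substituted into the unit file; the struct
// and field names are illustrative, not minikube's actual configuration types.
type dockerUnit struct {
	NoProxy          string
	InsecureRegistry string
	Provider         string
}

const unitTemplate = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY={{.NoProxy}}"

# Clear the ExecStart inherited from the base unit, then set our own.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider={{.Provider}} --insecure-registry {{.InsecureRegistry}}
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTemplate))
	// Render to stdout; in the log the rendered text is piped through
	// "sudo tee /lib/systemd/system/docker.service.new" over SSH, then only
	// moved into place (and docker restarted) if it differs from the old unit.
	if err := t.Execute(os.Stdout, dockerUnit{
		NoProxy:          "172.20.48.212",
		InsecureRegistry: "10.96.0.0/12",
		Provider:         "hyperv",
	}); err != nil {
		panic(err)
	}
}
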
	I0308 00:15:53.355999   12824 client.go:171] duration metric: took 1m45.8123857s to LocalClient.Create
	I0308 00:15:53.355999   12824 start.go:167] duration metric: took 1m45.8123857s to libmachine.API.Create "multinode-397400"
	I0308 00:15:53.355999   12824 start.go:293] postStartSetup for "multinode-397400-m02" (driver="hyperv")
	I0308 00:15:53.355999   12824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 00:15:53.367852   12824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 00:15:53.368433   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:15:55.250492   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:15:55.263233   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:55.263233   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:15:57.530884   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:15:57.530884   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:57.541819   12824 sshutil.go:53] new ssh client: &{IP:172.20.61.226 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:15:57.644072   12824 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2761792s)
	I0308 00:15:57.654652   12824 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 00:15:57.661396   12824 command_runner.go:130] > NAME=Buildroot
	I0308 00:15:57.661396   12824 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0308 00:15:57.661396   12824 command_runner.go:130] > ID=buildroot
	I0308 00:15:57.661396   12824 command_runner.go:130] > VERSION_ID=2023.02.9
	I0308 00:15:57.661396   12824 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0308 00:15:57.661525   12824 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 00:15:57.661559   12824 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 00:15:57.661973   12824 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 00:15:57.662870   12824 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 00:15:57.662870   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0308 00:15:57.672860   12824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 00:15:57.691423   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 00:15:57.732305   12824 start.go:296] duration metric: took 4.376265s for postStartSetup
	I0308 00:15:57.734960   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:15:59.589663   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:15:59.589663   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:15:59.600171   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:16:01.863408   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:16:01.863408   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:16:01.874213   12824 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:16:01.876379   12824 start.go:128] duration metric: took 1m54.3370665s to createHost
	I0308 00:16:01.876379   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:16:03.731007   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:16:03.741880   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:16:03.741880   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:16:05.960335   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:16:05.970761   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:16:05.975366   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:16:05.975994   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.226 22 <nil> <nil>}
	I0308 00:16:05.975994   12824 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 00:16:06.100856   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709856966.116435750
	
	I0308 00:16:06.100891   12824 fix.go:216] guest clock: 1709856966.116435750
	I0308 00:16:06.100936   12824 fix.go:229] Guest: 2024-03-08 00:16:06.11643575 +0000 UTC Remote: 2024-03-08 00:16:01.876379 +0000 UTC m=+312.062777401 (delta=4.24005675s)
	I0308 00:16:06.100994   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:16:07.978020   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:16:07.978202   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:16:07.978202   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:16:10.202346   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:16:10.202346   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:16:10.208157   12824 main.go:141] libmachine: Using SSH client type: native
	I0308 00:16:10.208748   12824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.226 22 <nil> <nil>}
	I0308 00:16:10.208775   12824 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709856966
	I0308 00:16:10.332844   12824 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 00:16:06 UTC 2024
	
	I0308 00:16:10.332844   12824 fix.go:236] clock set: Fri Mar  8 00:16:06 UTC 2024
	 (err=<nil>)
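The clock check above reads the guest time with `date +%s.%N`, compares it against the host clock (a delta of roughly 4.2s here), and resets the guest with `sudo date -s @<seconds>` when the skew is too large. A small sketch of that comparison follows; the 2-second threshold is an assumption, the log does not state one.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Value echoed back by the guest in the log above.
	guest, err := parseGuestClock("1709856966.116435750")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	// Threshold is an assumption for illustration only.
	if delta > 2*time.Second {
		// The fix in the log is a remote `sudo date -s @<unix-seconds>`.
		fmt.Printf("clock skew %v too large, would run: sudo date -s @%d\n",
			delta, host.Unix())
	} else {
		fmt.Printf("clock skew %v within tolerance\n", delta)
	}
}
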
	I0308 00:16:10.332844   12824 start.go:83] releasing machines lock for "multinode-397400-m02", held for 2m2.7935024s
	I0308 00:16:10.332844   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:16:12.222080   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:16:12.235690   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:16:12.235934   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:16:14.465823   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:16:14.469594   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:16:14.472230   12824 out.go:177] * Found network options:
	I0308 00:16:14.475120   12824 out.go:177]   - NO_PROXY=172.20.48.212
	W0308 00:16:14.478545   12824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 00:16:14.481073   12824 out.go:177]   - NO_PROXY=172.20.48.212
	W0308 00:16:14.482256   12824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 00:16:14.484731   12824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 00:16:14.487651   12824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 00:16:14.487651   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:16:14.497185   12824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0308 00:16:14.497185   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:16:16.439033   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:16:16.439033   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:16:16.439033   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:16:16.464539   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:16:16.464539   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:16:16.466334   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:16:18.835077   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:16:18.835306   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:16:18.835714   12824 sshutil.go:53] new ssh client: &{IP:172.20.61.226 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:16:18.855358   12824 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:16:18.855358   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:16:18.856209   12824 sshutil.go:53] new ssh client: &{IP:172.20.61.226 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:16:18.934280   12824 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0308 00:16:18.934280   12824 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.4370531s)
	W0308 00:16:18.934280   12824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 00:16:18.950907   12824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 00:16:19.157778   12824 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0308 00:16:19.158596   12824 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6709014s)
	I0308 00:16:19.158682   12824 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0308 00:16:19.158682   12824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
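The `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled` step above sidelines any pre-existing bridge or podman CNI configs (here 87-podman-bridge.conflist) so they cannot conflict with the CNI that minikube manages. The same idea as a short Go sketch, run locally instead of via sudo over SSH; the glob patterns are taken from the command in the log.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Patterns taken from the find command in the log above.
	patterns := []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"}
	for _, p := range patterns {
		matches, err := filepath.Glob(p)
		if err != nil {
			fmt.Println(err)
			continue
		}
		for _, m := range matches {
			// Skip files that have already been disabled.
			if filepath.Ext(m) == ".mk_disabled" {
				continue
			}
			// e.g. 87-podman-bridge.conflist -> 87-podman-bridge.conflist.mk_disabled
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println(err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}
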
	I0308 00:16:19.158682   12824 start.go:494] detecting cgroup driver to use...
	I0308 00:16:19.158682   12824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:16:19.187750   12824 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0308 00:16:19.199661   12824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 00:16:19.229106   12824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 00:16:19.248135   12824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 00:16:19.259543   12824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 00:16:19.287557   12824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:16:19.316939   12824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 00:16:19.344567   12824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:16:19.374979   12824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 00:16:19.403348   12824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 00:16:19.432740   12824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 00:16:19.439472   12824 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0308 00:16:19.457867   12824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 00:16:19.485630   12824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:16:19.645762   12824 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0308 00:16:19.674283   12824 start.go:494] detecting cgroup driver to use...
	I0308 00:16:19.685319   12824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 00:16:19.689584   12824 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0308 00:16:19.705276   12824 command_runner.go:130] > [Unit]
	I0308 00:16:19.705276   12824 command_runner.go:130] > Description=Docker Application Container Engine
	I0308 00:16:19.705276   12824 command_runner.go:130] > Documentation=https://docs.docker.com
	I0308 00:16:19.705276   12824 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0308 00:16:19.705276   12824 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0308 00:16:19.705345   12824 command_runner.go:130] > StartLimitBurst=3
	I0308 00:16:19.705345   12824 command_runner.go:130] > StartLimitIntervalSec=60
	I0308 00:16:19.705395   12824 command_runner.go:130] > [Service]
	I0308 00:16:19.705430   12824 command_runner.go:130] > Type=notify
	I0308 00:16:19.705430   12824 command_runner.go:130] > Restart=on-failure
	I0308 00:16:19.705430   12824 command_runner.go:130] > Environment=NO_PROXY=172.20.48.212
	I0308 00:16:19.705472   12824 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0308 00:16:19.705532   12824 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0308 00:16:19.705532   12824 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0308 00:16:19.705568   12824 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0308 00:16:19.705568   12824 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0308 00:16:19.705595   12824 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0308 00:16:19.705595   12824 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0308 00:16:19.705626   12824 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0308 00:16:19.705626   12824 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0308 00:16:19.705662   12824 command_runner.go:130] > ExecStart=
	I0308 00:16:19.705689   12824 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0308 00:16:19.705720   12824 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0308 00:16:19.705785   12824 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0308 00:16:19.705816   12824 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0308 00:16:19.705816   12824 command_runner.go:130] > LimitNOFILE=infinity
	I0308 00:16:19.705816   12824 command_runner.go:130] > LimitNPROC=infinity
	I0308 00:16:19.705816   12824 command_runner.go:130] > LimitCORE=infinity
	I0308 00:16:19.705816   12824 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0308 00:16:19.705853   12824 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0308 00:16:19.705885   12824 command_runner.go:130] > TasksMax=infinity
	I0308 00:16:19.705885   12824 command_runner.go:130] > TimeoutStartSec=0
	I0308 00:16:19.705885   12824 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0308 00:16:19.705916   12824 command_runner.go:130] > Delegate=yes
	I0308 00:16:19.705916   12824 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0308 00:16:19.705954   12824 command_runner.go:130] > KillMode=process
	I0308 00:16:19.705954   12824 command_runner.go:130] > [Install]
	I0308 00:16:19.705991   12824 command_runner.go:130] > WantedBy=multi-user.target
	I0308 00:16:19.715423   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:16:19.743751   12824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 00:16:19.775755   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:16:19.807320   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:16:19.838179   12824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0308 00:16:19.893640   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:16:19.913997   12824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:16:19.946020   12824 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0308 00:16:19.957681   12824 ssh_runner.go:195] Run: which cri-dockerd
	I0308 00:16:19.962009   12824 command_runner.go:130] > /usr/bin/cri-dockerd
	I0308 00:16:19.980401   12824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 00:16:20.004157   12824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 00:16:20.046132   12824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 00:16:20.219623   12824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 00:16:20.377349   12824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 00:16:20.377454   12824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0308 00:16:20.417423   12824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:16:20.583336   12824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 00:16:22.090793   12824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5074427s)
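Before that docker restart, a small daemon.json is pushed straight from memory to /etc/docker/daemon.json to switch the daemon to the cgroupfs cgroup driver. A sketch of generating such a file follows; only the cgroupfs driver is confirmed by the log, the remaining keys are assumptions for illustration.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The cgroup driver comes from the log ("configuring docker to use
	// \"cgroupfs\" as cgroup driver"); the remaining keys are assumptions.
	cfg := map[string]interface{}{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
		"log-opts":   map[string]string{"max-size": "100m"},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	// In the log this content is scp'd from memory to /etc/docker/daemon.json,
	// followed by a daemon-reload and a docker restart.
	fmt.Println(string(out))
}
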
	I0308 00:16:22.105479   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 00:16:22.135599   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:16:22.167826   12824 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 00:16:22.334101   12824 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 00:16:22.502089   12824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:16:22.664533   12824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 00:16:22.703267   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:16:22.736699   12824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:16:22.902748   12824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 00:16:22.990491   12824 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 00:16:23.005570   12824 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 00:16:23.012708   12824 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0308 00:16:23.012747   12824 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0308 00:16:23.012747   12824 command_runner.go:130] > Device: 0,22	Inode: 890         Links: 1
	I0308 00:16:23.012747   12824 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0308 00:16:23.012747   12824 command_runner.go:130] > Access: 2024-03-08 00:16:22.944517589 +0000
	I0308 00:16:23.012747   12824 command_runner.go:130] > Modify: 2024-03-08 00:16:22.944517589 +0000
	I0308 00:16:23.012747   12824 command_runner.go:130] > Change: 2024-03-08 00:16:22.947517601 +0000
	I0308 00:16:23.012747   12824 command_runner.go:130] >  Birth: -
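After cri-docker.service is restarted, start-up waits up to 60s for /var/run/cri-dockerd.sock to appear before probing crictl. A minimal polling sketch of that wait follows; the 500ms interval is an assumption.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a socket, or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("cri-dockerd socket is ready")
}
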
	I0308 00:16:23.012747   12824 start.go:562] Will wait 60s for crictl version
	I0308 00:16:23.024994   12824 ssh_runner.go:195] Run: which crictl
	I0308 00:16:23.027533   12824 command_runner.go:130] > /usr/bin/crictl
	I0308 00:16:23.039223   12824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 00:16:23.104462   12824 command_runner.go:130] > Version:  0.1.0
	I0308 00:16:23.104462   12824 command_runner.go:130] > RuntimeName:  docker
	I0308 00:16:23.104462   12824 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0308 00:16:23.104576   12824 command_runner.go:130] > RuntimeApiVersion:  v1
	I0308 00:16:23.104576   12824 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 00:16:23.113346   12824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:16:23.148477   12824 command_runner.go:130] > 24.0.7
	I0308 00:16:23.158518   12824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:16:23.190015   12824 command_runner.go:130] > 24.0.7
	I0308 00:16:23.194108   12824 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 00:16:23.196684   12824 out.go:177]   - env NO_PROXY=172.20.48.212
	I0308 00:16:23.199198   12824 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 00:16:23.202968   12824 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 00:16:23.202968   12824 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 00:16:23.202968   12824 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 00:16:23.202968   12824 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 00:16:23.203390   12824 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 00:16:23.203390   12824 ip.go:210] interface addr: 172.20.48.1/20
	I0308 00:16:23.215825   12824 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 00:16:23.217738   12824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 00:16:23.237983   12824 mustload.go:65] Loading cluster: multinode-397400
	I0308 00:16:23.239962   12824 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:16:23.240670   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:16:25.100068   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:16:25.100068   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:16:25.100068   12824 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:16:25.101148   12824 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400 for IP: 172.20.61.226
	I0308 00:16:25.101148   12824 certs.go:194] generating shared ca certs ...
	I0308 00:16:25.101249   12824 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:16:25.101319   12824 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 00:16:25.102096   12824 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 00:16:25.102250   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 00:16:25.102408   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0308 00:16:25.102599   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 00:16:25.102817   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 00:16:25.103345   12824 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 00:16:25.103649   12824 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 00:16:25.103718   12824 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 00:16:25.104004   12824 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 00:16:25.104276   12824 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 00:16:25.104276   12824 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 00:16:25.104853   12824 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 00:16:25.105232   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0308 00:16:25.105282   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:16:25.105282   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0308 00:16:25.105282   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 00:16:25.148692   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 00:16:25.189404   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 00:16:25.227643   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 00:16:25.264800   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 00:16:25.303642   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 00:16:25.343269   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 00:16:25.392468   12824 ssh_runner.go:195] Run: openssl version
	I0308 00:16:25.399756   12824 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0308 00:16:25.410528   12824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 00:16:25.438544   12824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 00:16:25.445922   12824 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:16:25.446311   12824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:16:25.458171   12824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 00:16:25.465272   12824 command_runner.go:130] > 3ec20f2e
	I0308 00:16:25.475200   12824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 00:16:25.505237   12824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 00:16:25.534306   12824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:16:25.540355   12824 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:16:25.540355   12824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:16:25.551431   12824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:16:25.561198   12824 command_runner.go:130] > b5213941
	I0308 00:16:25.571839   12824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 00:16:25.601998   12824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 00:16:25.630502   12824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 00:16:25.640008   12824 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:16:25.640051   12824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:16:25.651619   12824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 00:16:25.659451   12824 command_runner.go:130] > 51391683
	I0308 00:16:25.669467   12824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
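Each CA certificate above is installed by copying the PEM into /usr/share/ca-certificates, asking openssl for its subject hash (for example b5213941 for minikubeCA.pem), and creating the /etc/ssl/certs/<hash>.0 symlink that OpenSSL's lookup path expects. A short sketch of that hash-and-link step, run locally here rather than via sudo over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert links certPath into /etc/ssl/certs under its OpenSSL subject hash.
func installCert(certPath string) error {
	// `openssl x509 -hash -noout -in <cert>` prints the subject hash,
	// e.g. "b5213941" for minikubeCA.pem in the log above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs: remove any stale link first, then create the new one.
	_ = os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		return err
	}
	fmt.Printf("%s -> %s\n", link, certPath)
	return nil
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/8324.pem",
		"/usr/share/ca-certificates/83242.pem",
	} {
		if err := installCert(c); err != nil {
			fmt.Println(err)
		}
	}
}
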
	I0308 00:16:25.699704   12824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 00:16:25.702191   12824 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 00:16:25.705591   12824 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 00:16:25.705838   12824 kubeadm.go:928] updating node {m02 172.20.61.226 8443 v1.28.4 docker false true} ...
	I0308 00:16:25.705838   12824 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-397400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.61.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 00:16:25.715526   12824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 00:16:25.731357   12824 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0308 00:16:25.731357   12824 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0308 00:16:25.744398   12824 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0308 00:16:25.761628   12824 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0308 00:16:25.761659   12824 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0308 00:16:25.761781   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0308 00:16:25.761803   12824 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0308 00:16:25.762168   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0308 00:16:25.773711   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 00:16:25.776938   12824 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0308 00:16:25.777448   12824 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0308 00:16:25.794230   12824 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0308 00:16:25.794230   12824 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0308 00:16:25.795808   12824 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0308 00:16:25.795808   12824 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0308 00:16:25.795808   12824 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0308 00:16:25.795987   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0308 00:16:25.796109   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0308 00:16:25.807645   12824 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0308 00:16:25.852454   12824 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0308 00:16:25.863291   12824 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0308 00:16:25.863291   12824 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
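The kubeadm, kubectl and kubelet binaries are not cached locally here; they are fetched from dl.k8s.io and verified against the published .sha256 files (the `?checksum=file:...` suffix in the URLs above) before being copied into /var/lib/minikube/binaries on the node. A condensed sketch of that download-and-verify step follows; the /tmp destination paths are placeholders.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into dest and returns the SHA-256 of what was written.
func fetch(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/"
	for _, name := range []string{"kubeadm", "kubectl", "kubelet"} {
		got, err := fetch(base+name, "/tmp/"+name)
		if err != nil {
			fmt.Println(err)
			continue
		}
		// The published .sha256 file contains the expected hex digest.
		resp, err := http.Get(base + name + ".sha256")
		if err != nil {
			fmt.Println(err)
			continue
		}
		wantBytes, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		want := strings.Fields(string(wantBytes))[0]
		if got != want {
			fmt.Printf("%s: checksum mismatch: got %s want %s\n", name, got, want)
			continue
		}
		fmt.Printf("%s verified (%s)\n", name, got)
	}
}
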
	I0308 00:16:28.117145   12824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0308 00:16:28.132829   12824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0308 00:16:28.162101   12824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 00:16:28.197497   12824 ssh_runner.go:195] Run: grep 172.20.48.212	control-plane.minikube.internal$ /etc/hosts
	I0308 00:16:28.204744   12824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.48.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
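The /etc/hosts update above drops any stale line ending in a tab plus `control-plane.minikube.internal` and appends a fresh `<ip><tab><name>` entry, writing through a temp file before copying it back with sudo. A sketch of the same rewrite done directly in Go; the rename at the end stands in for the log's `sudo cp /tmp/h.$$ /etc/hosts`.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one line mapping
// name to ip, mirroring the grep -v / echo / cp pipeline in the log above.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing line that already ends in "<tab><name>".
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	// Write through a temp file; the log copies /tmp/h.$$ back with sudo cp,
	// a rename is used here only to keep the sketch self-contained.
	tmp := hostsPath + ".new"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "172.20.48.212", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
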
	I0308 00:16:28.232525   12824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:16:28.409352   12824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:16:28.439590   12824 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:16:28.439901   12824 start.go:316] joinCluster: &{Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.48.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.61.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 00:16:28.439901   12824 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0308 00:16:28.440533   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:16:30.327461   12824 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:16:30.327461   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:16:30.327461   12824 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:16:32.561304   12824 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:16:32.561304   12824 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:16:32.572149   12824 sshutil.go:53] new ssh client: &{IP:172.20.48.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:16:32.743235   12824 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token nsprf0.ytfefs8zv9xvacgi --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
	I0308 00:16:32.743368   12824 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.303293s)
	I0308 00:16:32.743481   12824 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.20.61.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0308 00:16:32.743560   12824 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nsprf0.ytfefs8zv9xvacgi --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-397400-m02"
	I0308 00:16:32.953385   12824 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 00:16:35.264418   12824 command_runner.go:130] > [preflight] Running pre-flight checks
	I0308 00:16:35.264418   12824 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0308 00:16:35.264418   12824 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0308 00:16:35.264418   12824 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 00:16:35.264418   12824 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 00:16:35.264418   12824 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0308 00:16:35.264418   12824 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0308 00:16:35.264418   12824 command_runner.go:130] > This node has joined the cluster:
	I0308 00:16:35.264418   12824 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0308 00:16:35.264418   12824 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0308 00:16:35.264418   12824 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0308 00:16:35.264418   12824 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nsprf0.ytfefs8zv9xvacgi --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-397400-m02": (2.5208341s)
	I0308 00:16:35.264418   12824 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0308 00:16:35.469365   12824 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0308 00:16:35.653646   12824 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-397400-m02 minikube.k8s.io/updated_at=2024_03_08T00_16_35_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=multinode-397400 minikube.k8s.io/primary=false
	I0308 00:16:35.774722   12824 command_runner.go:130] > node/multinode-397400-m02 labeled
	I0308 00:16:35.776517   12824 start.go:318] duration metric: took 7.3365466s to joinCluster
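joinCluster above asks the control plane for a fresh join command (`kubeadm token create --print-join-command --ttl=0`), runs it on the new worker with --ignore-preflight-errors=all and the cri-dockerd socket, enables kubelet, and finally labels the node. A condensed sketch of that sequence using local exec follows; the SSH plumbing is omitted and the label set is abbreviated.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command and returns its combined, trimmed output.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm" // path from the log

	// 1. On the control plane: print a join command with a non-expiring token.
	joinCmd, err := run("sudo", kubeadm, "token", "create", "--print-join-command", "--ttl=0")
	if err != nil {
		panic(err)
	}

	// 2. On the worker: run that command with the extra flags seen in the log.
	fields := strings.Fields(joinCmd) // "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ..."
	args := append(fields[1:],
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/cri-dockerd.sock",
		"--node-name=multinode-397400-m02")
	if out, err := run("sudo", append([]string{kubeadm}, args...)...); err != nil {
		panic(fmt.Sprintf("join failed: %v\n%s", err, out))
	}

	// 3. Enable and start kubelet so the node keeps running after reboots.
	if out, err := run("sudo", "systemctl", "enable", "--now", "kubelet"); err != nil {
		panic(fmt.Sprintf("kubelet enable failed: %v\n%s", err, out))
	}

	// 4. Back on the control plane: label the new node (label set abbreviated).
	if out, err := run("kubectl", "label", "--overwrite", "nodes",
		"multinode-397400-m02", "minikube.k8s.io/primary=false"); err != nil {
		panic(fmt.Sprintf("label failed: %v\n%s", err, out))
	}
	fmt.Println("worker multinode-397400-m02 joined and labeled")
}
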
	I0308 00:16:35.776517   12824 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.20.61.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0308 00:16:35.781147   12824 out.go:177] * Verifying Kubernetes components...
	I0308 00:16:35.777304   12824 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:16:35.794319   12824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:16:35.988902   12824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:16:36.014962   12824 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:16:36.015667   12824 kapi.go:59] client config for multinode-397400: &rest.Config{Host:"https://172.20.48.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 00:16:36.016130   12824 node_ready.go:35] waiting up to 6m0s for node "multinode-397400-m02" to be "Ready" ...
	I0308 00:16:36.016801   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:36.016837   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:36.016837   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:36.016837   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:36.030436   12824 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0308 00:16:36.030677   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:36.030677   12824 round_trippers.go:580]     Content-Length: 4043
	I0308 00:16:36.030677   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:36 GMT
	I0308 00:16:36.030677   12824 round_trippers.go:580]     Audit-Id: 13807715-f617-4b4c-b88a-ca6803e46851
	I0308 00:16:36.030677   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:36.030677   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:36.030677   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:36.030677   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:36.030677   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"594","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 3019 chars]
	I0308 00:16:36.524578   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:36.524801   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:36.524852   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:36.524883   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:36.525999   12824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:16:36.525999   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:36.525999   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:36.525999   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:36.525999   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:36 GMT
	I0308 00:16:36.525999   12824 round_trippers.go:580]     Audit-Id: 39bf2b8d-0ff1-4a94-8ffe-7497e7f13a9c
	I0308 00:16:36.525999   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:36.525999   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:36.528736   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:37.029279   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:37.029279   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:37.029279   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:37.029279   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:37.029672   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:37.033960   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:37.033960   12824 round_trippers.go:580]     Audit-Id: 2c601055-cbf3-427e-a5e4-cd363fcd9965
	I0308 00:16:37.033960   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:37.033960   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:37.033960   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:37.033960   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:37.033960   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:37 GMT
	I0308 00:16:37.034167   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:37.528585   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:37.528585   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:37.528585   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:37.528585   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:37.529133   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:37.529133   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:37.529133   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:37.529133   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:37 GMT
	I0308 00:16:37.529133   12824 round_trippers.go:580]     Audit-Id: 03ebc546-f330-4845-97fb-75ea08f7d251
	I0308 00:16:37.529133   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:37.529133   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:37.529133   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:37.532992   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:38.024062   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:38.024062   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:38.024062   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:38.024062   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:38.024581   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:38.024581   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:38.028806   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:38 GMT
	I0308 00:16:38.028806   12824 round_trippers.go:580]     Audit-Id: 53d4f156-e3ce-449a-8adb-177fa67b6463
	I0308 00:16:38.028806   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:38.028806   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:38.028806   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:38.028806   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:38.028888   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:38.028888   12824 node_ready.go:53] node "multinode-397400-m02" has status "Ready":"False"
	I0308 00:16:38.527511   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:38.527511   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:38.527511   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:38.527511   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:38.531661   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:38.531661   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:38.531661   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:38.531661   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:38.531661   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:38.531661   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:38.531661   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:38 GMT
	I0308 00:16:38.531661   12824 round_trippers.go:580]     Audit-Id: 0b63c082-89f2-4582-ab6b-5708848844d1
	I0308 00:16:38.532002   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:39.019358   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:39.019437   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:39.019437   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:39.019437   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:39.019732   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:39.023035   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:39.023035   12824 round_trippers.go:580]     Audit-Id: 5f0e9a7d-7017-4dfb-9bbb-5407570cda29
	I0308 00:16:39.023035   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:39.023035   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:39.023035   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:39.023035   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:39.023035   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:39 GMT
	I0308 00:16:39.023035   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:39.520693   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:39.520765   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:39.520765   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:39.520765   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:39.521097   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:39.521097   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:39.521097   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:39.521097   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:39.521097   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:39 GMT
	I0308 00:16:39.521097   12824 round_trippers.go:580]     Audit-Id: 4a537fc7-8683-407a-9abb-d3a430b77420
	I0308 00:16:39.521097   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:39.521097   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:39.524207   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:40.017804   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:40.017804   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:40.017804   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:40.017804   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:40.018347   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:40.018347   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:40.018347   12824 round_trippers.go:580]     Audit-Id: 66297378-9414-46af-ba5b-c24be2600e76
	I0308 00:16:40.018347   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:40.018347   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:40.021401   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:40.021401   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:40.021401   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:40 GMT
	I0308 00:16:40.021475   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:40.516916   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:40.517181   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:40.517207   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:40.517207   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:40.517839   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:40.517839   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:40.517839   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:40 GMT
	I0308 00:16:40.517839   12824 round_trippers.go:580]     Audit-Id: 3bdc3b18-acab-4605-a2d2-f444df8b1b64
	I0308 00:16:40.517839   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:40.517839   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:40.517839   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:40.517839   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:40.521368   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:40.522046   12824 node_ready.go:53] node "multinode-397400-m02" has status "Ready":"False"
	I0308 00:16:41.032786   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:41.032786   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:41.032786   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:41.032786   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:41.034924   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:16:41.036127   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:41.036127   12824 round_trippers.go:580]     Audit-Id: 48ea5e93-b715-4a91-9067-7da8e1540624
	I0308 00:16:41.036127   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:41.036127   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:41.036127   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:41.036127   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:41.036127   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:41 GMT
	I0308 00:16:41.036127   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:41.528804   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:41.528804   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:41.528804   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:41.528804   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:41.529390   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:41.532597   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:41.532597   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:41.532597   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:41.532597   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:41.532597   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:41 GMT
	I0308 00:16:41.532597   12824 round_trippers.go:580]     Audit-Id: 5ae04382-4658-4d1f-b797-fcb387d54717
	I0308 00:16:41.532597   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:41.532597   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:42.032559   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:42.032559   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:42.032559   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:42.032559   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:42.033517   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:42.033517   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:42.033517   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:42.033517   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:42.033517   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:42.033517   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:42 GMT
	I0308 00:16:42.037144   12824 round_trippers.go:580]     Audit-Id: 7f73b819-660c-4690-877b-6d5322d4d0be
	I0308 00:16:42.037144   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:42.037183   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:42.521838   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:42.521908   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:42.521908   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:42.521942   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:42.523630   12824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:16:42.523630   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:42.523630   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:42.523630   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:42 GMT
	I0308 00:16:42.523630   12824 round_trippers.go:580]     Audit-Id: a60885d9-aeec-4dcf-a1ff-98420e52e6a0
	I0308 00:16:42.523630   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:42.523630   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:42.523630   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:42.523630   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:42.525916   12824 node_ready.go:53] node "multinode-397400-m02" has status "Ready":"False"
	I0308 00:16:43.028929   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:43.029070   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:43.029070   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:43.029070   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:43.036243   12824 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0308 00:16:43.036243   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:43.036243   12824 round_trippers.go:580]     Audit-Id: 5a1ab1b3-635e-4e35-9058-392607cb1d2f
	I0308 00:16:43.036243   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:43.036243   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:43.036243   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:43.036243   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:43.036243   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:43 GMT
	I0308 00:16:43.036243   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:43.522634   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:43.522875   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:43.522875   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:43.522875   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:43.528126   12824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:16:43.528310   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:43.528310   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:43.528310   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:43 GMT
	I0308 00:16:43.528354   12824 round_trippers.go:580]     Audit-Id: 55579af8-2b5a-4568-96c9-b64c6099205e
	I0308 00:16:43.528354   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:43.528354   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:43.528384   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:43.528384   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:44.025242   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:44.025633   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:44.025633   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:44.025633   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:44.026509   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:44.026509   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:44.026509   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:44.026509   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:44.026509   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:44 GMT
	I0308 00:16:44.026509   12824 round_trippers.go:580]     Audit-Id: 4915e3f1-82a0-4b13-a142-25dd0534372c
	I0308 00:16:44.026509   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:44.026509   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:44.029774   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:44.525144   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:44.525144   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:44.525144   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:44.525144   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:44.525830   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:44.525830   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:44.525830   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:44 GMT
	I0308 00:16:44.525830   12824 round_trippers.go:580]     Audit-Id: 11dbd295-132d-408d-91a1-216d74b82229
	I0308 00:16:44.525830   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:44.525830   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:44.525830   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:44.525830   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:44.528479   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:44.528479   12824 node_ready.go:53] node "multinode-397400-m02" has status "Ready":"False"
	I0308 00:16:45.029546   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:45.029546   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:45.029546   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:45.029546   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:45.034610   12824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:16:45.034610   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:45.034610   12824 round_trippers.go:580]     Audit-Id: 399d4720-315e-4b26-ad98-ac5f816b9af3
	I0308 00:16:45.034610   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:45.034610   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:45.034610   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:45.034610   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:45.034610   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:45 GMT
	I0308 00:16:45.034610   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"596","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3128 chars]
	I0308 00:16:45.528009   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:45.528009   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:45.528009   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:45.528009   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:45.539767   12824 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0308 00:16:45.539767   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:45.539767   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:45 GMT
	I0308 00:16:45.539767   12824 round_trippers.go:580]     Audit-Id: 9105887d-4ffd-4398-842c-7a4c04d631e2
	I0308 00:16:45.547678   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:45.547678   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:45.547678   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:45.547678   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:45.547953   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:46.023536   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:46.023536   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:46.023536   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:46.023536   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:46.024703   12824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:16:46.027979   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:46.028021   12824 round_trippers.go:580]     Audit-Id: 32ade9e8-3947-43c9-b33c-0214237e54dd
	I0308 00:16:46.028021   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:46.028021   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:46.028021   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:46.028063   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:46.028063   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:46 GMT
	I0308 00:16:46.028063   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:46.526924   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:46.526993   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:46.526993   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:46.527043   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:46.529420   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:16:46.530851   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:46.530851   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:46 GMT
	I0308 00:16:46.530851   12824 round_trippers.go:580]     Audit-Id: a1af2577-045b-4df4-bf9d-d877084e2e0a
	I0308 00:16:46.530851   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:46.530851   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:46.530851   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:46.530851   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:46.531206   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:46.531206   12824 node_ready.go:53] node "multinode-397400-m02" has status "Ready":"False"
	I0308 00:16:47.017480   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:47.017480   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:47.017480   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:47.017480   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:47.020393   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:16:47.021825   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:47.021825   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:47.021825   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:47.021825   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:47.021825   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:47.021825   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:47 GMT
	I0308 00:16:47.021825   12824 round_trippers.go:580]     Audit-Id: bc069b91-0cec-411f-8a5a-24e365b23ea3
	I0308 00:16:47.021918   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:47.527989   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:47.528021   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:47.528021   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:47.528021   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:47.530983   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:16:47.532136   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:47.532136   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:47.532136   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:47 GMT
	I0308 00:16:47.532136   12824 round_trippers.go:580]     Audit-Id: 42b67099-ddb0-42d0-ab69-e126e285f8bc
	I0308 00:16:47.532136   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:47.532136   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:47.532136   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:47.532136   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:48.023250   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:48.023353   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:48.023353   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:48.023353   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:48.026921   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:16:48.026921   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:48.026921   12824 round_trippers.go:580]     Audit-Id: 3e126e53-7782-42ec-9dae-d71bf5562efe
	I0308 00:16:48.026921   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:48.026921   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:48.026921   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:48.026921   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:48.027181   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:48 GMT
	I0308 00:16:48.027842   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:48.536252   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:48.536363   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:48.536363   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:48.536363   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:48.536772   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:48.539704   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:48.539704   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:48.539704   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:48.539704   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:48 GMT
	I0308 00:16:48.539916   12824 round_trippers.go:580]     Audit-Id: 5d4efc31-41bf-45aa-ac26-72ea938b6426
	I0308 00:16:48.539966   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:48.539966   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:48.539999   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:48.540866   12824 node_ready.go:53] node "multinode-397400-m02" has status "Ready":"False"
	I0308 00:16:49.022972   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:49.023270   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:49.023405   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:49.023618   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:49.028471   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:49.028529   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:49.028529   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:49.028529   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:49 GMT
	I0308 00:16:49.028529   12824 round_trippers.go:580]     Audit-Id: e28ba0cb-76fe-4501-8709-8a245d881086
	I0308 00:16:49.028529   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:49.028529   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:49.028529   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:49.028529   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:49.525294   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:49.525369   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:49.525369   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:49.525369   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:49.526021   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:49.526021   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:49.529666   12824 round_trippers.go:580]     Audit-Id: b9300990-9077-4410-8906-d221957b46c5
	I0308 00:16:49.529666   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:49.529666   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:49.529666   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:49.529666   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:49.529666   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:49 GMT
	I0308 00:16:49.530029   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:50.029288   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:50.029288   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:50.029288   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:50.029288   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:50.032707   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:50.032707   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:50.032707   12824 round_trippers.go:580]     Audit-Id: 36cbbeb8-03ee-4cd6-82b1-ede85a8d0671
	I0308 00:16:50.032707   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:50.032707   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:50.032707   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:50.032707   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:50.032707   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:50 GMT
	I0308 00:16:50.032707   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:50.527090   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:50.527280   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:50.527280   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:50.527280   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:50.530116   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:16:50.530735   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:50.530735   12824 round_trippers.go:580]     Audit-Id: ca97e624-cc62-4b3d-b4b7-9802ccab5e27
	I0308 00:16:50.530735   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:50.530735   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:50.530735   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:50.530837   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:50.530837   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:50 GMT
	I0308 00:16:50.531655   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:51.025446   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:51.025446   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:51.025446   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:51.025446   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:51.028738   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:16:51.028915   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:51.028915   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:51 GMT
	I0308 00:16:51.028915   12824 round_trippers.go:580]     Audit-Id: 05b4bb19-f27f-4e5d-a73d-8da9a5e2b753
	I0308 00:16:51.028915   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:51.028966   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:51.028966   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:51.028966   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:51.029142   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:51.029614   12824 node_ready.go:53] node "multinode-397400-m02" has status "Ready":"False"
	I0308 00:16:51.529396   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:51.529489   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:51.529489   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:51.529489   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:51.529796   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:51.532867   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:51.532867   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:51 GMT
	I0308 00:16:51.532867   12824 round_trippers.go:580]     Audit-Id: d82ff573-7e13-4b16-8e25-6c19a1ebbe33
	I0308 00:16:51.532867   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:51.532867   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:51.532867   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:51.532867   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:51.533172   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:52.036258   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:52.036258   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:52.036258   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:52.036362   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:52.042333   12824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:16:52.042374   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:52.042374   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:52 GMT
	I0308 00:16:52.042374   12824 round_trippers.go:580]     Audit-Id: f6086eaf-1cb3-44a0-9bdc-2b47393f36f6
	I0308 00:16:52.042374   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:52.042447   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:52.042447   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:52.042471   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:52.043397   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:52.519581   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:52.519581   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:52.519581   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:52.519581   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:52.520179   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:52.524216   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:52.524216   12824 round_trippers.go:580]     Audit-Id: aaa8bef3-5dec-475d-aef5-14d99e0bd801
	I0308 00:16:52.524273   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:52.524273   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:52.524273   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:52.524273   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:52.524273   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:52 GMT
	I0308 00:16:52.524273   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"614","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0308 00:16:53.021643   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:53.021643   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:53.021643   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:53.021643   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:53.024740   12824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:16:53.024740   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:53.024740   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:53.024740   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:53.024740   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:53.024740   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:53.024740   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:53 GMT
	I0308 00:16:53.024740   12824 round_trippers.go:580]     Audit-Id: 9896ddf0-3045-457a-8110-cf3afdf47b4b
	I0308 00:16:53.031219   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"627","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3324 chars]
	I0308 00:16:53.031486   12824 node_ready.go:49] node "multinode-397400-m02" has status "Ready":"True"
	I0308 00:16:53.031486   12824 node_ready.go:38] duration metric: took 17.0151948s for node "multinode-397400-m02" to be "Ready" ...
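The node readiness wait recorded above is a fixed-interval poll: the Node object is re-fetched roughly every 500 ms and its Ready condition checked until it flips to True (here after about 17 s). A minimal sketch of the same check using client-go is shown below; the kubeconfig path and the hard-coded node name are illustrative assumptions for this sketch, not details taken from the test harness itself.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isNodeReady reports whether the node's Ready condition is True,
    // which is the condition the node_ready.go lines above are polling for.
    func isNodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Illustrative assumption: kubeconfig at the default location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll roughly every 500 ms, as the GET requests above do.
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-397400-m02", metav1.GetOptions{})
            if err == nil && isNodeReady(node) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }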
	I0308 00:16:53.031486   12824 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:16:53.031486   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods
	I0308 00:16:53.031486   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:53.031486   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:53.031486   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:53.038950   12824 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 00:16:53.038950   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:53.038950   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:53.038950   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:53 GMT
	I0308 00:16:53.038950   12824 round_trippers.go:580]     Audit-Id: dee3f34c-9e16-47b1-8c87-00d2e81f8771
	I0308 00:16:53.038950   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:53.038950   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:53.038950   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:53.041559   12824 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"627"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"444","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67474 chars]
	I0308 00:16:53.045513   12824 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	I0308 00:16:53.045672   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:16:53.045672   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:53.045672   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:53.045672   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:53.046459   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:53.049173   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:53.049173   12824 round_trippers.go:580]     Audit-Id: 35214834-86ea-4103-bb1a-ba4b9ae423fc
	I0308 00:16:53.049173   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:53.049173   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:53.049173   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:53.049173   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:53.049173   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:53 GMT
	I0308 00:16:53.049746   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"444","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0308 00:16:53.050445   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:16:53.050445   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:53.050445   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:53.050445   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:53.053086   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:16:53.053086   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:53.053086   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:53 GMT
	I0308 00:16:53.058163   12824 round_trippers.go:580]     Audit-Id: db7ece51-1fc1-4670-99dd-9359bd8dc213
	I0308 00:16:53.058238   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:53.058238   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:53.058238   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:53.058238   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:53.058873   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"454","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0308 00:16:53.059576   12824 pod_ready.go:92] pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace has status "Ready":"True"
	I0308 00:16:53.059645   12824 pod_ready.go:81] duration metric: took 14.1319ms for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
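Each per-pod wait in this phase follows the same shape: list or fetch the pod, read its Ready condition, and fetch the node it is scheduled on. The sketch below approximates that readiness check under the same illustrative kubeconfig assumption as the previous sketch; it lists kube-system pods once and reports the Ready condition, and is a simplification rather than the test's actual pod_ready.go helper.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Illustrative assumption: kubeconfig at the default location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // One list call against kube-system, like the PodList request above,
        // followed by a per-pod readiness report.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for i := range pods.Items {
            p := &pods.Items[i]
            fmt.Printf("%s Ready=%v\n", p.Name, isPodReady(p))
        }
    }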
	I0308 00:16:53.059755   12824 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:16:53.059822   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:16:53.059822   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:53.059822   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:53.059822   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:53.060525   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:53.060525   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:53.060525   12824 round_trippers.go:580]     Audit-Id: 17e09d08-a5d1-4455-8a5d-8f86f5df6d7b
	I0308 00:16:53.060525   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:53.060525   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:53.060525   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:53.060525   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:53.060525   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:53 GMT
	I0308 00:16:53.063447   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"e576042a-07ca-47b1-b815-88318bfc734e","resourceVersion":"322","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.48.212:2379","kubernetes.io/config.hash":"fc65775229edb6b7e62a37e01d988ef3","kubernetes.io/config.mirror":"fc65775229edb6b7e62a37e01d988ef3","kubernetes.io/config.seen":"2024-03-08T00:13:39.441051880Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0308 00:16:53.063994   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:16:53.064177   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:53.064177   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:53.064177   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:53.065459   12824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:16:53.065459   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:53.065459   12824 round_trippers.go:580]     Audit-Id: 491954bc-5b31-4777-9c3c-31694a1a67a2
	I0308 00:16:53.065459   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:53.065459   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:53.065459   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:53.065459   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:53.065459   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:53 GMT
	I0308 00:16:53.067990   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"454","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0308 00:16:53.068865   12824 pod_ready.go:92] pod "etcd-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:16:53.068895   12824 pod_ready.go:81] duration metric: took 9.0731ms for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:16:53.068927   12824 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:16:53.068927   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-397400
	I0308 00:16:53.068927   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:53.068927   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:53.068927   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:53.082743   12824 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0308 00:16:53.087293   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:53.087293   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:53.087293   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:53.087293   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:53 GMT
	I0308 00:16:53.087293   12824 round_trippers.go:580]     Audit-Id: 8545c9b3-3e78-4eba-be50-9e635fbad897
	I0308 00:16:53.087293   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:53.087293   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:53.087559   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-397400","namespace":"kube-system","uid":"084257fc-8f2b-4540-8b93-3d11bed62c3b","resourceVersion":"317","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.48.212:8443","kubernetes.io/config.hash":"e54af4aacb740938efeadd3de88c5b29","kubernetes.io/config.mirror":"e54af4aacb740938efeadd3de88c5b29","kubernetes.io/config.seen":"2024-03-08T00:13:39.441056480Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0308 00:16:53.088150   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:16:53.088245   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:53.088245   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:53.088245   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:53.089648   12824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:16:53.090911   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:53.090911   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:53 GMT
	I0308 00:16:53.090911   12824 round_trippers.go:580]     Audit-Id: 07699139-8eac-4b34-bdc0-e8b0a3c7083f
	I0308 00:16:53.090911   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:53.090911   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:53.090911   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:53.090911   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:53.090911   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"454","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0308 00:16:53.090911   12824 pod_ready.go:92] pod "kube-apiserver-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:16:53.090911   12824 pod_ready.go:81] duration metric: took 21.984ms for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:16:53.090911   12824 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:16:53.091501   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-397400
	I0308 00:16:53.091501   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:53.091501   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:53.091501   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:53.093557   12824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:16:53.093557   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:53.094937   12824 round_trippers.go:580]     Audit-Id: a24eb9f8-226f-4e8e-a2e4-80327d066247
	I0308 00:16:53.094937   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:53.094937   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:53.094937   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:53.094937   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:53.094937   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:53 GMT
	I0308 00:16:53.094937   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-397400","namespace":"kube-system","uid":"33cdb29c-e857-4fc2-b950-4fdde032852f","resourceVersion":"316","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.mirror":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.seen":"2024-03-08T00:13:39.441057580Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0308 00:16:53.095840   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:16:53.095840   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:53.095840   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:53.095840   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:53.096454   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:53.096454   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:53.096454   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:53.096454   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:53.096454   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:53 GMT
	I0308 00:16:53.099237   12824 round_trippers.go:580]     Audit-Id: e675ac89-80c0-4ed5-88c7-8012571b67d0
	I0308 00:16:53.099237   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:53.099237   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:53.099575   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"454","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0308 00:16:53.100046   12824 pod_ready.go:92] pod "kube-controller-manager-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:16:53.100077   12824 pod_ready.go:81] duration metric: took 8.5762ms for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:16:53.100129   12824 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
	I0308 00:16:53.229166   12824 request.go:629] Waited for 128.7977ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:16:53.229455   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:16:53.229455   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:53.229455   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:53.229455   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:53.229983   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:53.229983   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:53.229983   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:53 GMT
	I0308 00:16:53.229983   12824 round_trippers.go:580]     Audit-Id: 1d03fa76-a83e-48cf-a532-36a88ee46367
	I0308 00:16:53.229983   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:53.229983   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:53.229983   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:53.229983   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:53.233615   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gw9w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b5de9a2-0643-466e-9a31-4349596c0417","resourceVersion":"610","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0308 00:16:53.424254   12824 request.go:629] Waited for 189.8394ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:53.424337   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:16:53.424512   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:53.424595   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:53.424595   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:53.424801   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:53.424801   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:53.424801   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:53.424801   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:53.427464   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:53 GMT
	I0308 00:16:53.427464   12824 round_trippers.go:580]     Audit-Id: 6de1a2b7-87ad-493b-89b1-965597267e1c
	I0308 00:16:53.427464   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:53.427464   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:53.427759   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"628","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3263 chars]
	I0308 00:16:53.427759   12824 pod_ready.go:92] pod "kube-proxy-gw9w9" in "kube-system" namespace has status "Ready":"True"
	I0308 00:16:53.427759   12824 pod_ready.go:81] duration metric: took 327.6266ms for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
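The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter (by default roughly 5 requests per second with a burst of 10), not from API-server priority and fairness. A minimal sketch of where that limit is configured follows; the QPS and Burst values shown are illustrative only and are not what this test run used.

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative assumption: kubeconfig at the default location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Client-side rate limiting lives on the rest.Config. Leaving QPS and
        // Burst unset gives client-go's defaults (about 5 QPS, burst 10), which
        // is what produces the throttling wait messages seen in this log.
        cfg.QPS = 50   // illustrative value
        cfg.Burst = 100 // illustrative value
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }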
	I0308 00:16:53.427759   12824 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:16:53.626845   12824 request.go:629] Waited for 198.3563ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:16:53.626940   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:16:53.626940   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:53.626940   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:53.626940   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:53.627298   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:53.630465   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:53.630465   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:53.630465   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:53.630465   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:53.630465   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:53.630465   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:53 GMT
	I0308 00:16:53.630465   12824 round_trippers.go:580]     Audit-Id: 44085198-8af3-4b7c-83e5-85b4d6de28e8
	I0308 00:16:53.630537   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nt8td","generateName":"kube-proxy-","namespace":"kube-system","uid":"dafb9385-fe20-4849-bd58-31dcf82b4a58","resourceVersion":"403","creationTimestamp":"2024-03-08T00:13:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0308 00:16:53.834486   12824 request.go:629] Waited for 202.9226ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:16:53.834671   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:16:53.834742   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:53.834742   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:53.834802   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:53.835080   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:53.838600   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:53.838600   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:53.838600   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:53.838600   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:53 GMT
	I0308 00:16:53.838600   12824 round_trippers.go:580]     Audit-Id: 0c20991c-a0d6-47ab-a9fb-a5ae92d7504f
	I0308 00:16:53.838600   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:53.838600   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:53.838938   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"454","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0308 00:16:53.839599   12824 pod_ready.go:92] pod "kube-proxy-nt8td" in "kube-system" namespace has status "Ready":"True"
	I0308 00:16:53.839599   12824 pod_ready.go:81] duration metric: took 411.8367ms for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:16:53.839599   12824 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:16:54.028207   12824 request.go:629] Waited for 188.6061ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:16:54.028523   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:16:54.028672   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:54.028672   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:54.028672   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:54.029448   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:54.032416   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:54.032416   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:54.032416   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:54 GMT
	I0308 00:16:54.032416   12824 round_trippers.go:580]     Audit-Id: 319653ba-7dc3-495d-972f-e1e9339a9ec3
	I0308 00:16:54.032416   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:54.032416   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:54.032416   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:54.032631   12824 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"313","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0308 00:16:54.232512   12824 request.go:629] Waited for 198.8745ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:16:54.232584   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes/multinode-397400
	I0308 00:16:54.232584   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:54.232584   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:54.232584   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:54.233129   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:54.233129   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:54.236135   12824 round_trippers.go:580]     Audit-Id: c136529d-a009-4256-84e7-fc0ac6f84bee
	I0308 00:16:54.236135   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:54.236135   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:54.236135   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:54.236135   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:54.236135   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:54 GMT
	I0308 00:16:54.236347   12824 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"454","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0308 00:16:54.238209   12824 pod_ready.go:92] pod "kube-scheduler-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:16:54.238209   12824 pod_ready.go:81] duration metric: took 398.6056ms for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:16:54.238209   12824 pod_ready.go:38] duration metric: took 1.2067109s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:16:54.238209   12824 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 00:16:54.248491   12824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 00:16:54.269964   12824 system_svc.go:56] duration metric: took 31.7553ms WaitForService to wait for kubelet
	I0308 00:16:54.269999   12824 kubeadm.go:576] duration metric: took 18.4933062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 00:16:54.269999   12824 node_conditions.go:102] verifying NodePressure condition ...
	I0308 00:16:54.424807   12824 request.go:629] Waited for 154.6763ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.48.212:8443/api/v1/nodes
	I0308 00:16:54.425110   12824 round_trippers.go:463] GET https://172.20.48.212:8443/api/v1/nodes
	I0308 00:16:54.425212   12824 round_trippers.go:469] Request Headers:
	I0308 00:16:54.425244   12824 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:16:54.425272   12824 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:16:54.425989   12824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:16:54.425989   12824 round_trippers.go:577] Response Headers:
	I0308 00:16:54.425989   12824 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:16:54.425989   12824 round_trippers.go:580]     Content-Type: application/json
	I0308 00:16:54.425989   12824 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:16:54.425989   12824 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:16:54.425989   12824 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:16:54 GMT
	I0308 00:16:54.425989   12824 round_trippers.go:580]     Audit-Id: 8e6e7a2b-dd2c-48fb-90d3-656444996252
	I0308 00:16:54.429221   12824 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"629"},"items":[{"metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"454","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9266 chars]
	I0308 00:16:54.429873   12824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:16:54.430008   12824 node_conditions.go:123] node cpu capacity is 2
	I0308 00:16:54.430095   12824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:16:54.430095   12824 node_conditions.go:123] node cpu capacity is 2
	I0308 00:16:54.430150   12824 node_conditions.go:105] duration metric: took 160.0559ms to run NodePressure ...
	I0308 00:16:54.430150   12824 start.go:240] waiting for startup goroutines ...
	I0308 00:16:54.430150   12824 start.go:254] writing updated cluster config ...
	I0308 00:16:54.440192   12824 ssh_runner.go:195] Run: rm -f paused
	I0308 00:16:54.573092   12824 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 00:16:54.577293   12824 out.go:177] * Done! kubectl is now configured to use "multinode-397400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 08 00:14:04 multinode-397400 dockerd[1314]: time="2024-03-08T00:14:04.356780530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:14:04 multinode-397400 dockerd[1314]: time="2024-03-08T00:14:04.366065770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 00:14:04 multinode-397400 dockerd[1314]: time="2024-03-08T00:14:04.366233370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:14:04 multinode-397400 dockerd[1314]: time="2024-03-08T00:14:04.366318271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:14:04 multinode-397400 dockerd[1314]: time="2024-03-08T00:14:04.366878273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:14:04 multinode-397400 cri-dockerd[1205]: time="2024-03-08T00:14:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fdffd4f1db96359702b6e1b9e7ff73ab1f1b844e6a3b1f11852af238ae5d2701/resolv.conf as [nameserver 172.20.48.1]"
	Mar 08 00:14:04 multinode-397400 cri-dockerd[1205]: time="2024-03-08T00:14:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/13e6ea5ce4bdc0a7325e43205a9a70dbb67ebf8fec024d951734f43232419dc0/resolv.conf as [nameserver 172.20.48.1]"
	Mar 08 00:14:04 multinode-397400 dockerd[1314]: time="2024-03-08T00:14:04.699622367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 00:14:04 multinode-397400 dockerd[1314]: time="2024-03-08T00:14:04.699769367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:14:04 multinode-397400 dockerd[1314]: time="2024-03-08T00:14:04.699801968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:14:04 multinode-397400 dockerd[1314]: time="2024-03-08T00:14:04.700171269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:14:04 multinode-397400 dockerd[1314]: time="2024-03-08T00:14:04.819801178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 00:14:04 multinode-397400 dockerd[1314]: time="2024-03-08T00:14:04.819956279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:14:04 multinode-397400 dockerd[1314]: time="2024-03-08T00:14:04.819976179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:14:04 multinode-397400 dockerd[1314]: time="2024-03-08T00:14:04.820073079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:17:17 multinode-397400 dockerd[1314]: time="2024-03-08T00:17:17.587317990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 00:17:17 multinode-397400 dockerd[1314]: time="2024-03-08T00:17:17.587962193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:17:17 multinode-397400 dockerd[1314]: time="2024-03-08T00:17:17.588075694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:17:17 multinode-397400 dockerd[1314]: time="2024-03-08T00:17:17.588909198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:17:17 multinode-397400 cri-dockerd[1205]: time="2024-03-08T00:17:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cdb14ba5528098cf8a62ccb7d77596ca119641b3ebe5eda2ae7d1bd2aedd597c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 08 00:17:18 multinode-397400 cri-dockerd[1205]: time="2024-03-08T00:17:18Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 08 00:17:19 multinode-397400 dockerd[1314]: time="2024-03-08T00:17:19.108738341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 00:17:19 multinode-397400 dockerd[1314]: time="2024-03-08T00:17:19.108873242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:17:19 multinode-397400 dockerd[1314]: time="2024-03-08T00:17:19.108894942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:17:19 multinode-397400 dockerd[1314]: time="2024-03-08T00:17:19.109024443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ce9a9bc4cfe37       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   47 seconds ago      Running             busybox                   0                   cdb14ba552809       busybox-5b5d89c9d6-j7ck4
	b8903699a2e38       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   0                   13e6ea5ce4bdc       coredns-5dd5756b68-w4hzh
	84e1da671abd2       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   fdffd4f1db963       storage-provisioner
	91ada1ebb521d       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   90ba9a9d99a3d       kindnet-wkwtm
	79433b5ca644a       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                0                   9c957cee5d35c       kube-proxy-nt8td
	0aaf57b801fb8       e3db313c6dbc0                                                                                         4 minutes ago       Running             kube-scheduler            0                   d4b57713d4316       kube-scheduler-multinode-397400
	4f8851b134589       d058aa5ab969c                                                                                         4 minutes ago       Running             kube-controller-manager   0                   ead2ed31c6b3d       kube-controller-manager-multinode-397400
	23ccdb1fc3b53       7fe0e6f37db33                                                                                         4 minutes ago       Running             kube-apiserver            0                   6b6ed8345b8fa       kube-apiserver-multinode-397400
	c0241fd304ad6       73deb9a3f7025                                                                                         4 minutes ago       Running             etcd                      0                   45fec6e97f7a8       etcd-multinode-397400
	
	
	==> coredns [b8903699a2e3] <==
	[INFO] 10.244.1.2:38583 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092301s
	[INFO] 10.244.0.3:51514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000902s
	[INFO] 10.244.0.3:34101 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000146601s
	[INFO] 10.244.0.3:39343 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125001s
	[INFO] 10.244.0.3:51579 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202401s
	[INFO] 10.244.0.3:34574 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000234402s
	[INFO] 10.244.0.3:41474 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161301s
	[INFO] 10.244.0.3:56490 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117701s
	[INFO] 10.244.0.3:47237 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125501s
	[INFO] 10.244.1.2:57949 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186801s
	[INFO] 10.244.1.2:51978 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082601s
	[INFO] 10.244.1.2:53464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123401s
	[INFO] 10.244.1.2:60851 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124401s
	[INFO] 10.244.0.3:47849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000966s
	[INFO] 10.244.0.3:33374 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000329903s
	[INFO] 10.244.0.3:33498 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231301s
	[INFO] 10.244.0.3:49302 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158701s
	[INFO] 10.244.1.2:57262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157901s
	[INFO] 10.244.1.2:56667 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000185301s
	[INFO] 10.244.1.2:47521 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000193002s
	[INFO] 10.244.1.2:51329 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000258401s
	[INFO] 10.244.0.3:49110 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166601s
	[INFO] 10.244.0.3:55134 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128401s
	[INFO] 10.244.0.3:43988 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000051301s
	[INFO] 10.244.0.3:49870 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000082101s
	
	
	==> describe nodes <==
	Name:               multinode-397400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-397400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=multinode-397400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T00_13_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 00:13:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-397400
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 00:17:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 00:17:44 +0000   Fri, 08 Mar 2024 00:13:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 00:17:44 +0000   Fri, 08 Mar 2024 00:13:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 00:17:44 +0000   Fri, 08 Mar 2024 00:13:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 00:17:44 +0000   Fri, 08 Mar 2024 00:14:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.48.212
	  Hostname:    multinode-397400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b4483af5bd24f80a53788a63b6ff28a
	  System UUID:                8391dbcb-b4b7-5845-b9ff-a5eba8cddcb5
	  Boot ID:                    77a7f926-228e-4a7f-b583-b0e572dd44fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-j7ck4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 coredns-5dd5756b68-w4hzh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m13s
	  kube-system                 etcd-multinode-397400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m26s
	  kube-system                 kindnet-wkwtm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m14s
	  kube-system                 kube-apiserver-multinode-397400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-controller-manager-multinode-397400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-proxy-nt8td                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-multinode-397400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m11s                  kube-proxy       
	  Normal  Starting                 4m35s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m34s (x8 over 4m35s)  kubelet          Node multinode-397400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s (x8 over 4m35s)  kubelet          Node multinode-397400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s (x7 over 4m35s)  kubelet          Node multinode-397400 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m26s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m26s                  kubelet          Node multinode-397400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s                  kubelet          Node multinode-397400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s                  kubelet          Node multinode-397400 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m14s                  node-controller  Node multinode-397400 event: Registered Node multinode-397400 in Controller
	  Normal  NodeReady                4m2s                   kubelet          Node multinode-397400 status is now: NodeReady
	
	
	Name:               multinode-397400-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-397400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=multinode-397400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T00_16_35_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 00:16:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-397400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 00:17:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 00:17:36 +0000   Fri, 08 Mar 2024 00:16:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 00:17:36 +0000   Fri, 08 Mar 2024 00:16:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 00:17:36 +0000   Fri, 08 Mar 2024 00:16:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 00:17:36 +0000   Fri, 08 Mar 2024 00:16:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.61.226
	  Hostname:    multinode-397400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 a47f3f2075a648cf88231f7223b27fb7
	  System UUID:                12e9ba38-a8d8-e14f-9556-c9cd17fe7785
	  Boot ID:                    66b04048-3a52-4d71-8c4f-9d1919f0f324
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-ctt42    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kindnet-jvzwq               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      90s
	  kube-system                 kube-proxy-gw9w9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 80s                kube-proxy       
	  Normal  NodeHasSufficientMemory  90s (x5 over 92s)  kubelet          Node multinode-397400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s (x5 over 92s)  kubelet          Node multinode-397400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s (x5 over 92s)  kubelet          Node multinode-397400-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           89s                node-controller  Node multinode-397400-m02 event: Registered Node multinode-397400-m02 in Controller
	  Normal  NodeReady                72s                kubelet          Node multinode-397400-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar 8 00:12] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.182430] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[Mar 8 00:13] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +0.084018] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.464966] systemd-fstab-generator[973]: Ignoring "noauto" option for root device
	[  +0.161812] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.198143] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +1.781167] systemd-fstab-generator[1158]: Ignoring "noauto" option for root device
	[  +0.156496] systemd-fstab-generator[1170]: Ignoring "noauto" option for root device
	[  +0.177388] systemd-fstab-generator[1182]: Ignoring "noauto" option for root device
	[  +0.236202] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[ +13.355925] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.101439] kauditd_printk_skb: 205 callbacks suppressed
	[  +2.609492] systemd-fstab-generator[1485]: Ignoring "noauto" option for root device
	[  +5.696633] systemd-fstab-generator[1749]: Ignoring "noauto" option for root device
	[  +0.089877] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.800805] systemd-fstab-generator[2729]: Ignoring "noauto" option for root device
	[  +0.132709] kauditd_printk_skb: 62 callbacks suppressed
	[ +12.752220] systemd-fstab-generator[4361]: Ignoring "noauto" option for root device
	[  +0.250175] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.876504] kauditd_printk_skb: 51 callbacks suppressed
	[Mar 8 00:14] kauditd_printk_skb: 19 callbacks suppressed
	[Mar 8 00:16] hrtimer: interrupt took 972704 ns
	
	
	==> etcd [c0241fd304ad] <==
	{"level":"info","ts":"2024-03-08T00:13:33.133203Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.48.212:2379"}
	{"level":"info","ts":"2024-03-08T00:13:33.138673Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T00:13:33.139015Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T00:13:33.163506Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T00:13:59.652283Z","caller":"traceutil/trace.go:171","msg":"trace[241853249] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"134.325809ms","start":"2024-03-08T00:13:59.517938Z","end":"2024-03-08T00:13:59.652264Z","steps":["trace[241853249] 'process raft request'  (duration: 134.225508ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T00:14:16.471929Z","caller":"traceutil/trace.go:171","msg":"trace[1184339844] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"167.74918ms","start":"2024-03-08T00:14:16.304159Z","end":"2024-03-08T00:14:16.471908Z","steps":["trace[1184339844] 'process raft request'  (duration: 167.513278ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-08T00:14:19.204602Z","caller":"traceutil/trace.go:171","msg":"trace[861730606] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"102.701709ms","start":"2024-03-08T00:14:19.101881Z","end":"2024-03-08T00:14:19.204582Z","steps":["trace[861730606] 'process raft request'  (duration: 102.411307ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T00:15:07.810204Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.488906ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14053920360553634070 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:497 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-08T00:15:07.811495Z","caller":"traceutil/trace.go:171","msg":"trace[848436151] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"133.651546ms","start":"2024-03-08T00:15:07.677829Z","end":"2024-03-08T00:15:07.81148Z","steps":["trace[848436151] 'process raft request'  (duration: 23.008828ms)","trace[848436151] 'compare'  (duration: 108.335205ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-08T00:16:28.013989Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.248754ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14053920360553634370 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.20.48.212\" mod_revision:554 > success:<request_put:<key:\"/registry/masterleases/172.20.48.212\" value_size:66 lease:4830548323698858560 >> failure:<request_range:<key:\"/registry/masterleases/172.20.48.212\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-08T00:16:28.014265Z","caller":"traceutil/trace.go:171","msg":"trace[19679509] linearizableReadLoop","detail":"{readStateIndex:608; appliedIndex:607; }","duration":"252.779498ms","start":"2024-03-08T00:16:27.761472Z","end":"2024-03-08T00:16:28.014251Z","steps":["trace[19679509] 'read index received'  (duration: 50.985142ms)","trace[19679509] 'applied index is now lower than readState.Index'  (duration: 201.793356ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-08T00:16:28.015059Z","caller":"traceutil/trace.go:171","msg":"trace[1339853423] transaction","detail":"{read_only:false; response_revision:562; number_of_response:1; }","duration":"348.25865ms","start":"2024-03-08T00:16:27.666732Z","end":"2024-03-08T00:16:28.01499Z","steps":["trace[1339853423] 'process raft request'  (duration: 145.853191ms)","trace[1339853423] 'compare'  (duration: 201.059653ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-08T00:16:28.016304Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-08T00:16:27.666714Z","time spent":"349.546356ms","remote":"127.0.0.1:59058","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.20.48.212\" mod_revision:554 > success:<request_put:<key:\"/registry/masterleases/172.20.48.212\" value_size:66 lease:4830548323698858560 >> failure:<request_range:<key:\"/registry/masterleases/172.20.48.212\" > >"}
	{"level":"warn","ts":"2024-03-08T00:16:28.015844Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.149799ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-08T00:16:28.017481Z","caller":"traceutil/trace.go:171","msg":"trace[1904190169] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:562; }","duration":"256.022313ms","start":"2024-03-08T00:16:27.761448Z","end":"2024-03-08T00:16:28.01747Z","steps":["trace[1904190169] 'agreement among raft nodes before linearized reading'  (duration: 253.005099ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T00:16:43.963014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.390721ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-08T00:16:43.963093Z","caller":"traceutil/trace.go:171","msg":"trace[1377580558] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:606; }","duration":"197.492222ms","start":"2024-03-08T00:16:43.765586Z","end":"2024-03-08T00:16:43.963078Z","steps":["trace[1377580558] 'range keys from in-memory index tree'  (duration: 197.26102ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T00:16:48.969921Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.021807ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14053920360553634582 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:615 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-08T00:16:48.970005Z","caller":"traceutil/trace.go:171","msg":"trace[2060995102] linearizableReadLoop","detail":"{readStateIndex:671; appliedIndex:670; }","duration":"271.815662ms","start":"2024-03-08T00:16:48.698177Z","end":"2024-03-08T00:16:48.969992Z","steps":["trace[2060995102] 'read index received'  (duration: 11.961755ms)","trace[2060995102] 'applied index is now lower than readState.Index'  (duration: 259.852907ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-08T00:16:48.970408Z","caller":"traceutil/trace.go:171","msg":"trace[520448204] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"388.493105ms","start":"2024-03-08T00:16:48.581868Z","end":"2024-03-08T00:16:48.970361Z","steps":["trace[520448204] 'process raft request'  (duration: 127.972995ms)","trace[520448204] 'compare'  (duration: 259.948407ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-08T00:16:48.970581Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.418866ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-08T00:16:48.97067Z","caller":"traceutil/trace.go:171","msg":"trace[1274121113] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:618; }","duration":"272.511966ms","start":"2024-03-08T00:16:48.69815Z","end":"2024-03-08T00:16:48.970662Z","steps":["trace[1274121113] 'agreement among raft nodes before linearized reading'  (duration: 272.281365ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T00:16:48.970951Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.220071ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-08T00:16:48.971985Z","caller":"traceutil/trace.go:171","msg":"trace[1592984043] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:618; }","duration":"210.251976ms","start":"2024-03-08T00:16:48.761723Z","end":"2024-03-08T00:16:48.971975Z","steps":["trace[1592984043] 'agreement among raft nodes before linearized reading'  (duration: 209.201671ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-08T00:16:48.970634Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-08T00:16:48.581858Z","time spent":"388.739206ms","remote":"127.0.0.1:59202","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:615 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 00:18:05 up 6 min,  0 users,  load average: 0.50, 0.69, 0.37
	Linux multinode-397400 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [91ada1ebb521] <==
	I0308 00:17:01.051241       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:17:11.066548       1 main.go:223] Handling node with IPs: map[172.20.48.212:{}]
	I0308 00:17:11.066587       1 main.go:227] handling current node
	I0308 00:17:11.066599       1 main.go:223] Handling node with IPs: map[172.20.61.226:{}]
	I0308 00:17:11.066622       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:17:21.078483       1 main.go:223] Handling node with IPs: map[172.20.48.212:{}]
	I0308 00:17:21.078599       1 main.go:227] handling current node
	I0308 00:17:21.078615       1 main.go:223] Handling node with IPs: map[172.20.61.226:{}]
	I0308 00:17:21.078623       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:17:31.085408       1 main.go:223] Handling node with IPs: map[172.20.48.212:{}]
	I0308 00:17:31.085454       1 main.go:227] handling current node
	I0308 00:17:31.085465       1 main.go:223] Handling node with IPs: map[172.20.61.226:{}]
	I0308 00:17:31.085471       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:17:41.095896       1 main.go:223] Handling node with IPs: map[172.20.48.212:{}]
	I0308 00:17:41.096074       1 main.go:227] handling current node
	I0308 00:17:41.096108       1 main.go:223] Handling node with IPs: map[172.20.61.226:{}]
	I0308 00:17:41.096219       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:17:51.111350       1 main.go:223] Handling node with IPs: map[172.20.48.212:{}]
	I0308 00:17:51.111482       1 main.go:227] handling current node
	I0308 00:17:51.111498       1 main.go:223] Handling node with IPs: map[172.20.61.226:{}]
	I0308 00:17:51.111507       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:18:01.118175       1 main.go:223] Handling node with IPs: map[172.20.48.212:{}]
	I0308 00:18:01.118281       1 main.go:227] handling current node
	I0308 00:18:01.118295       1 main.go:223] Handling node with IPs: map[172.20.61.226:{}]
	I0308 00:18:01.118319       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [23ccdb1fc3b5] <==
	I0308 00:13:35.435556       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 00:13:35.435752       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0308 00:13:35.435877       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0308 00:13:35.437026       1 aggregator.go:166] initial CRD sync complete...
	I0308 00:13:35.437108       1 autoregister_controller.go:141] Starting autoregister controller
	I0308 00:13:35.437127       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0308 00:13:35.437243       1 cache.go:39] Caches are synced for autoregister controller
	I0308 00:13:35.439251       1 controller.go:624] quota admission added evaluator for: namespaces
	I0308 00:13:35.443747       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0308 00:13:35.511595       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 00:13:36.310591       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0308 00:13:36.317061       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0308 00:13:36.317313       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0308 00:13:37.338334       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0308 00:13:37.469713       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0308 00:13:37.589249       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0308 00:13:37.602954       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.20.48.212]
	I0308 00:13:37.603894       1 controller.go:624] quota admission added evaluator for: endpoints
	I0308 00:13:37.615941       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0308 00:13:38.381521       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0308 00:13:39.285187       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0308 00:13:39.324275       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0308 00:13:39.341317       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0308 00:13:51.887715       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0308 00:13:51.990274       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4f8851b13458] <==
	I0308 00:13:52.902161       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.2µs"
	I0308 00:14:03.748292       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.401µs"
	I0308 00:14:03.781481       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="153.401µs"
	I0308 00:14:05.199178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.001µs"
	I0308 00:14:06.239451       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.829841ms"
	I0308 00:14:06.240833       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.7µs"
	I0308 00:14:06.341128       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0308 00:16:35.208687       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-397400-m02\" does not exist"
	I0308 00:16:35.235153       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gw9w9"
	I0308 00:16:35.242709       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-397400-m02" podCIDRs=["10.244.1.0/24"]
	I0308 00:16:35.243824       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jvzwq"
	I0308 00:16:36.367881       1 event.go:307] "Event occurred" object="multinode-397400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-397400-m02 event: Registered Node multinode-397400-m02 in Controller"
	I0308 00:16:36.368085       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-397400-m02"
	I0308 00:16:53.048111       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:17:16.987262       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0308 00:17:17.020897       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-ctt42"
	I0308 00:17:17.043839       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-j7ck4"
	I0308 00:17:17.064259       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="76.284548ms"
	I0308 00:17:17.099554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="35.193861ms"
	I0308 00:17:17.100450       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="226.601µs"
	I0308 00:17:17.100878       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="216.801µs"
	I0308 00:17:19.910304       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.499582ms"
	I0308 00:17:19.911349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="44.701µs"
	I0308 00:17:20.176000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.976786ms"
	I0308 00:17:20.176273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.1µs"
	
	
	==> kube-proxy [79433b5ca644] <==
	I0308 00:13:54.006048       1 server_others.go:69] "Using iptables proxy"
	I0308 00:13:54.040499       1 node.go:141] Successfully retrieved node IP: 172.20.48.212
	I0308 00:13:54.095908       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 00:13:54.096005       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 00:13:54.101982       1 server_others.go:152] "Using iptables Proxier"
	I0308 00:13:54.102091       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 00:13:54.102846       1 server.go:846] "Version info" version="v1.28.4"
	I0308 00:13:54.102861       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 00:13:54.104235       1 config.go:315] "Starting node config controller"
	I0308 00:13:54.104569       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 00:13:54.105241       1 config.go:97] "Starting endpoint slice config controller"
	I0308 00:13:54.106017       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 00:13:54.106286       1 config.go:188] "Starting service config controller"
	I0308 00:13:54.106444       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 00:13:54.205614       1 shared_informer.go:318] Caches are synced for node config
	I0308 00:13:54.206939       1 shared_informer.go:318] Caches are synced for service config
	I0308 00:13:54.206988       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0aaf57b801fb] <==
	W0308 00:13:36.474434       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 00:13:36.474595       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 00:13:36.477488       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 00:13:36.477702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 00:13:36.525082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0308 00:13:36.525124       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 00:13:36.600953       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 00:13:36.601042       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 00:13:36.636085       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0308 00:13:36.636109       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 00:13:36.684531       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 00:13:36.684579       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 00:13:36.716028       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 00:13:36.716307       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 00:13:36.848521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 00:13:36.848602       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 00:13:36.900721       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 00:13:36.900908       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 00:13:36.942519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 00:13:36.942753       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0308 00:13:36.951164       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 00:13:36.951329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 00:13:36.977745       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0308 00:13:36.977888       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0308 00:13:39.884202       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 00:14:05 multinode-397400 kubelet[2750]: I0308 00:14:05.229020    2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-w4hzh" podStartSLOduration=13.228970115 podCreationTimestamp="2024-03-08 00:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-08 00:14:05.200786002 +0000 UTC m=+25.959360022" watchObservedRunningTime="2024-03-08 00:14:05.228970115 +0000 UTC m=+25.987544235"
	Mar 08 00:14:06 multinode-397400 kubelet[2750]: I0308 00:14:06.219028    2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.218992159 podCreationTimestamp="2024-03-08 00:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-08 00:14:05.230576727 +0000 UTC m=+25.989150847" watchObservedRunningTime="2024-03-08 00:14:06.218992159 +0000 UTC m=+26.977566179"
	Mar 08 00:14:39 multinode-397400 kubelet[2750]: E0308 00:14:39.603097    2750 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 00:14:39 multinode-397400 kubelet[2750]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 00:14:39 multinode-397400 kubelet[2750]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 00:14:39 multinode-397400 kubelet[2750]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 00:14:39 multinode-397400 kubelet[2750]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 00:15:39 multinode-397400 kubelet[2750]: E0308 00:15:39.603123    2750 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 00:15:39 multinode-397400 kubelet[2750]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 00:15:39 multinode-397400 kubelet[2750]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 00:15:39 multinode-397400 kubelet[2750]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 00:15:39 multinode-397400 kubelet[2750]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 00:16:39 multinode-397400 kubelet[2750]: E0308 00:16:39.601007    2750 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 00:16:39 multinode-397400 kubelet[2750]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 00:16:39 multinode-397400 kubelet[2750]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 00:16:39 multinode-397400 kubelet[2750]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 00:16:39 multinode-397400 kubelet[2750]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 00:17:17 multinode-397400 kubelet[2750]: I0308 00:17:17.056668    2750 topology_manager.go:215] "Topology Admit Handler" podUID="e51ca92b-a5ba-4a9e-b233-52c0647c767a" podNamespace="default" podName="busybox-5b5d89c9d6-j7ck4"
	Mar 08 00:17:17 multinode-397400 kubelet[2750]: I0308 00:17:17.169728    2750 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7x2h\" (UniqueName: \"kubernetes.io/projected/e51ca92b-a5ba-4a9e-b233-52c0647c767a-kube-api-access-x7x2h\") pod \"busybox-5b5d89c9d6-j7ck4\" (UID: \"e51ca92b-a5ba-4a9e-b233-52c0647c767a\") " pod="default/busybox-5b5d89c9d6-j7ck4"
	Mar 08 00:17:19 multinode-397400 kubelet[2750]: I0308 00:17:19.899597    2750 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5b5d89c9d6-j7ck4" podStartSLOduration=1.850977872 podCreationTimestamp="2024-03-08 00:17:17 +0000 UTC" firstStartedPulling="2024-03-08 00:17:17.810695908 +0000 UTC m=+218.569270028" lastFinishedPulling="2024-03-08 00:17:18.85927825 +0000 UTC m=+219.617852270" observedRunningTime="2024-03-08 00:17:19.899201112 +0000 UTC m=+220.657775232" watchObservedRunningTime="2024-03-08 00:17:19.899560114 +0000 UTC m=+220.658134134"
	Mar 08 00:17:39 multinode-397400 kubelet[2750]: E0308 00:17:39.602816    2750 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 00:17:39 multinode-397400 kubelet[2750]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 00:17:39 multinode-397400 kubelet[2750]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 00:17:39 multinode-397400 kubelet[2750]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 00:17:39 multinode-397400 kubelet[2750]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 00:17:58.228550    5688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-397400 -n multinode-397400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-397400 -n multinode-397400: (10.729228s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-397400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (52.85s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (510.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-397400
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-397400
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-397400: (1m28.809762s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-397400 --wait=true -v=8 --alsologtostderr
E0308 00:34:37.390169    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0308 00:34:58.890017    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0308 00:38:02.102341    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-397400 --wait=true -v=8 --alsologtostderr: (6m29.3098556s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-397400
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-397400	172.20.48.212
multinode-397400-m02	172.20.61.226
multinode-397400-m03	172.20.52.190
After restart: multinode-397400	172.20.61.151
multinode-397400-m02	172.20.50.67
multinode-397400-m03	172.20.53.127
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-397400 -n multinode-397400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-397400 -n multinode-397400: (10.7498281s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 logs -n 25: (7.8775013s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:24 UTC | 08 Mar 24 00:24 UTC |
	|         | multinode-397400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-397400 cp multinode-397400-m02:/home/docker/cp-test.txt                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:24 UTC | 08 Mar 24 00:24 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1220590344\001\cp-test_multinode-397400-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:24 UTC | 08 Mar 24 00:24 UTC |
	|         | multinode-397400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-397400 cp multinode-397400-m02:/home/docker/cp-test.txt                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:24 UTC | 08 Mar 24 00:24 UTC |
	|         | multinode-397400:/home/docker/cp-test_multinode-397400-m02_multinode-397400.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:24 UTC | 08 Mar 24 00:25 UTC |
	|         | multinode-397400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n multinode-397400 sudo cat                                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:25 UTC | 08 Mar 24 00:25 UTC |
	|         | /home/docker/cp-test_multinode-397400-m02_multinode-397400.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-397400 cp multinode-397400-m02:/home/docker/cp-test.txt                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:25 UTC | 08 Mar 24 00:25 UTC |
	|         | multinode-397400-m03:/home/docker/cp-test_multinode-397400-m02_multinode-397400-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:25 UTC | 08 Mar 24 00:25 UTC |
	|         | multinode-397400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n multinode-397400-m03 sudo cat                                                                    | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:25 UTC | 08 Mar 24 00:25 UTC |
	|         | /home/docker/cp-test_multinode-397400-m02_multinode-397400-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-397400 cp testdata\cp-test.txt                                                                                 | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:25 UTC | 08 Mar 24 00:25 UTC |
	|         | multinode-397400-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:25 UTC | 08 Mar 24 00:26 UTC |
	|         | multinode-397400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-397400 cp multinode-397400-m03:/home/docker/cp-test.txt                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:26 UTC | 08 Mar 24 00:26 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1220590344\001\cp-test_multinode-397400-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:26 UTC | 08 Mar 24 00:26 UTC |
	|         | multinode-397400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-397400 cp multinode-397400-m03:/home/docker/cp-test.txt                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:26 UTC | 08 Mar 24 00:26 UTC |
	|         | multinode-397400:/home/docker/cp-test_multinode-397400-m03_multinode-397400.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:26 UTC | 08 Mar 24 00:26 UTC |
	|         | multinode-397400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n multinode-397400 sudo cat                                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:26 UTC | 08 Mar 24 00:26 UTC |
	|         | /home/docker/cp-test_multinode-397400-m03_multinode-397400.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-397400 cp multinode-397400-m03:/home/docker/cp-test.txt                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:26 UTC | 08 Mar 24 00:27 UTC |
	|         | multinode-397400-m02:/home/docker/cp-test_multinode-397400-m03_multinode-397400-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:27 UTC | 08 Mar 24 00:27 UTC |
	|         | multinode-397400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n multinode-397400-m02 sudo cat                                                                    | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:27 UTC | 08 Mar 24 00:27 UTC |
	|         | /home/docker/cp-test_multinode-397400-m03_multinode-397400-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-397400 node stop m03                                                                                           | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:27 UTC | 08 Mar 24 00:27 UTC |
	| node    | multinode-397400 node start                                                                                              | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:28 UTC | 08 Mar 24 00:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-397400                                                                                                 | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:31 UTC |                     |
	| stop    | -p multinode-397400                                                                                                      | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:31 UTC | 08 Mar 24 00:32 UTC |
	| start   | -p multinode-397400                                                                                                      | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:32 UTC | 08 Mar 24 00:39 UTC |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	| node    | list -p multinode-397400                                                                                                 | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:39 UTC |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 00:32:37
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 00:32:37.922575    8176 out.go:291] Setting OutFile to fd 856 ...
	I0308 00:32:37.923670    8176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 00:32:37.923670    8176 out.go:304] Setting ErrFile to fd 864...
	I0308 00:32:37.923670    8176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 00:32:37.940351    8176 out.go:298] Setting JSON to false
	I0308 00:32:37.948587    8176 start.go:129] hostinfo: {"hostname":"minikube7","uptime":16912,"bootTime":1709841045,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0308 00:32:37.948587    8176 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0308 00:32:37.977819    8176 out.go:177] * [multinode-397400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0308 00:32:38.038558    8176 notify.go:220] Checking for updates...
	I0308 00:32:38.155085    8176 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:32:38.293202    8176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 00:32:38.361537    8176 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0308 00:32:38.493209    8176 out.go:177]   - MINIKUBE_LOCATION=16214
	I0308 00:32:38.646925    8176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 00:32:38.713059    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:32:38.713153    8176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 00:32:43.604897    8176 out.go:177] * Using the hyperv driver based on existing profile
	I0308 00:32:43.656589    8176 start.go:297] selected driver: hyperv
	I0308 00:32:43.656589    8176 start.go:901] validating driver "hyperv" against &{Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.48.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.61.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.52.190 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 00:32:43.656589    8176 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 00:32:43.705141    8176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 00:32:43.705141    8176 cni.go:84] Creating CNI manager for ""
	I0308 00:32:43.705141    8176 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0308 00:32:43.705141    8176 start.go:340] cluster config:
	{Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.48.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.61.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.52.190 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 00:32:43.705863    8176 iso.go:125] acquiring lock: {Name:mk41e0d38e058de906ab8df117c3158b3dc0e5b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 00:32:43.800164    8176 out.go:177] * Starting "multinode-397400" primary control-plane node in "multinode-397400" cluster
	I0308 00:32:43.934219    8176 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 00:32:43.943525    8176 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0308 00:32:43.943630    8176 cache.go:56] Caching tarball of preloaded images
	I0308 00:32:43.943783    8176 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0308 00:32:43.943783    8176 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0308 00:32:43.944375    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:32:43.947511    8176 start.go:360] acquireMachinesLock for multinode-397400: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 00:32:43.947511    8176 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-397400"
	I0308 00:32:43.948034    8176 start.go:96] Skipping create...Using existing machine configuration
	I0308 00:32:43.948034    8176 fix.go:54] fixHost starting: 
	I0308 00:32:43.948548    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:32:46.285957    8176 main.go:141] libmachine: [stdout =====>] : Off
	
	I0308 00:32:46.295664    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:32:46.295664    8176 fix.go:112] recreateIfNeeded on multinode-397400: state=Stopped err=<nil>
	W0308 00:32:46.295834    8176 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 00:32:46.387487    8176 out.go:177] * Restarting existing hyperv VM for "multinode-397400" ...
	I0308 00:32:46.550249    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-397400
	I0308 00:32:50.753141    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:32:50.756220    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:32:50.756220    8176 main.go:141] libmachine: Waiting for host to start...
	I0308 00:32:50.756280    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:32:52.695071    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:32:52.695071    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:32:52.695071    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:32:54.887593    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:32:54.887593    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:32:55.900196    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:32:57.841836    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:32:57.844621    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:32:57.844621    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:00.050483    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:33:00.050483    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:01.057910    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:03.042148    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:03.042148    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:03.048515    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:05.301315    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:33:05.301315    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:06.312529    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:08.216151    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:08.216771    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:08.216836    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:10.457362    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:33:10.457663    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:11.461065    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:13.333521    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:13.344076    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:13.344076    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:15.483825    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:15.493732    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:15.496624    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:17.278328    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:17.278328    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:17.288440    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:19.427467    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:19.427467    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:19.437446    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:33:19.439554    8176 machine.go:94] provisionDockerMachine start ...
	I0308 00:33:19.439554    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:21.228418    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:21.228471    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:21.228471    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:23.355186    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:23.355305    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:23.362080    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:33:23.362716    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:33:23.362716    8176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 00:33:23.491398    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 00:33:23.491398    8176 buildroot.go:166] provisioning hostname "multinode-397400"
	I0308 00:33:23.491398    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:25.290309    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:25.300470    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:25.300470    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:27.432491    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:27.432491    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:27.437646    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:33:27.438255    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:33:27.438255    8176 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-397400 && echo "multinode-397400" | sudo tee /etc/hostname
	I0308 00:33:27.588273    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-397400
	
	I0308 00:33:27.588273    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:29.399818    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:29.400611    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:29.400689    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:31.530375    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:31.541143    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:31.545761    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:33:31.546324    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:33:31.546324    8176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-397400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-397400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-397400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 00:33:31.690358    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 00:33:31.690415    8176 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 00:33:31.690415    8176 buildroot.go:174] setting up certificates
	I0308 00:33:31.690415    8176 provision.go:84] configureAuth start
	I0308 00:33:31.690415    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:33.423927    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:33.433716    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:33.433811    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:35.566216    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:35.576436    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:35.576561    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:37.349043    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:37.349196    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:37.349196    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:39.455536    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:39.455621    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:39.455621    8176 provision.go:143] copyHostCerts
	I0308 00:33:39.455621    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0308 00:33:39.455621    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 00:33:39.455621    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 00:33:39.456220    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 00:33:39.457731    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0308 00:33:39.457731    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 00:33:39.457731    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 00:33:39.458440    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 00:33:39.459142    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0308 00:33:39.459142    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 00:33:39.459664    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 00:33:39.459791    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 00:33:39.460597    8176 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-397400 san=[127.0.0.1 172.20.61.151 localhost minikube multinode-397400]
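Note: the server certificate above is minted in-process by minikube (provision.go), signed by the CA under certs\ca.pem, with the SAN list shown in the log line. A rough hand-run equivalent with the openssl CLI is sketched below; only the SANs and the org name are taken from the log, while the key size, validity period, and file layout are assumptions.

	# Hedged sketch only -- not the code path minikube actually uses.
	openssl req -new -newkey rsa:2048 -nodes \
	  -subj "/O=jenkins.multinode-397400" \
	  -keyout server-key.pem -out server.csr
	printf 'subjectAltName=IP:127.0.0.1,IP:172.20.61.151,DNS:localhost,DNS:minikube,DNS:multinode-397400\n' > san.cnf
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -extfile san.cnf -out server.pem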
	I0308 00:33:39.570202    8176 provision.go:177] copyRemoteCerts
	I0308 00:33:39.581233    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 00:33:39.581233    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:41.418642    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:41.429092    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:41.429144    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:43.543732    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:43.543820    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:43.543877    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:33:43.646957    8176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.0655666s)
	I0308 00:33:43.646957    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0308 00:33:43.647433    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0308 00:33:43.683271    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0308 00:33:43.683271    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 00:33:43.709023    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0308 00:33:43.721242    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 00:33:43.755935    8176 provision.go:87] duration metric: took 12.0654072s to configureAuth
	I0308 00:33:43.756031    8176 buildroot.go:189] setting minikube options for container-runtime
	I0308 00:33:43.756111    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:33:43.756779    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:45.511498    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:45.511498    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:45.521394    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:47.611109    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:47.611109    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:47.626815    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:33:47.626815    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:33:47.626815    8176 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 00:33:47.772625    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 00:33:47.772625    8176 buildroot.go:70] root file system type: tmpfs
	I0308 00:33:47.772625    8176 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 00:33:47.772625    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:49.557878    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:49.558061    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:49.558166    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:51.715190    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:51.715190    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:51.720196    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:33:51.720500    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:33:51.720500    8176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 00:33:51.870543    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 00:33:51.870613    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:53.616895    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:53.616895    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:53.626040    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:55.743110    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:55.743110    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:55.758414    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:33:55.758414    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:33:55.758414    8176 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 00:33:57.122646    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0308 00:33:57.122731    8176 machine.go:97] duration metric: took 37.6828228s to provisionDockerMachine
	I0308 00:33:57.122731    8176 start.go:293] postStartSetup for "multinode-397400" (driver="hyperv")
	I0308 00:33:57.122790    8176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 00:33:57.134981    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 00:33:57.134981    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:58.921440    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:58.932707    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:58.932707    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:34:01.097456    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:34:01.097555    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:01.098005    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:34:01.201813    8176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.0667175s)
	I0308 00:34:01.213098    8176 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 00:34:01.219632    8176 command_runner.go:130] > NAME=Buildroot
	I0308 00:34:01.219843    8176 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0308 00:34:01.219843    8176 command_runner.go:130] > ID=buildroot
	I0308 00:34:01.219843    8176 command_runner.go:130] > VERSION_ID=2023.02.9
	I0308 00:34:01.219843    8176 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0308 00:34:01.219843    8176 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 00:34:01.220035    8176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 00:34:01.220232    8176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 00:34:01.221289    8176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 00:34:01.221289    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0308 00:34:01.230383    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 00:34:01.246802    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 00:34:01.285332    8176 start.go:296] duration metric: took 4.1625616s for postStartSetup
	I0308 00:34:01.285455    8176 fix.go:56] duration metric: took 1m17.3366942s for fixHost
	I0308 00:34:01.285575    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:34:03.060818    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:34:03.060818    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:03.061056    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:34:05.190060    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:34:05.190060    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:05.204399    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:34:05.205187    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:34:05.205187    8176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 00:34:05.329715    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709858045.344483499
	
	I0308 00:34:05.329715    8176 fix.go:216] guest clock: 1709858045.344483499
	I0308 00:34:05.329715    8176 fix.go:229] Guest: 2024-03-08 00:34:05.344483499 +0000 UTC Remote: 2024-03-08 00:34:01.2854885 +0000 UTC m=+83.527335301 (delta=4.058994999s)
	I0308 00:34:05.329715    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:34:07.103747    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:34:07.103897    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:07.104033    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:34:09.235729    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:34:09.235729    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:09.242817    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:34:09.243395    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:34:09.243395    8176 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709858045
	I0308 00:34:09.382751    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 00:34:05 UTC 2024
	
	I0308 00:34:09.382751    8176 fix.go:236] clock set: Fri Mar  8 00:34:05 UTC 2024
	 (err=<nil>)
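fix.go above compared the guest clock (00:34:05.34) against the local timestamp (00:34:01.28), logged a delta of roughly 4.06s, and then ran date -s on the guest over SSH. A drift check of the same flavor is sketched below for illustration; the 2-second threshold and the direction of the correction are assumptions, not minikube's exact logic.

	# Illustrative drift check (assumed pattern, not minikube's implementation).
	host_epoch=$(date +%s)                                   # run on the host
	guest_epoch=$(ssh docker@172.20.61.151 'date +%s')       # run on the guest
	drift=$((guest_epoch - host_epoch))
	if [ "${drift#-}" -gt 2 ]; then                          # tolerate up to 2s of skew
	  ssh docker@172.20.61.151 "sudo date -s @${host_epoch}"
	fi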
	I0308 00:34:09.382751    8176 start.go:83] releasing machines lock for "multinode-397400", held for 1m25.4344373s
	I0308 00:34:09.382751    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:34:11.146236    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:34:11.146236    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:11.146236    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:34:13.336909    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:34:13.336909    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:13.348150    8176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 00:34:13.348341    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:34:13.354436    8176 ssh_runner.go:195] Run: cat /version.json
	I0308 00:34:13.354436    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:34:15.248403    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:34:15.258797    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:15.258895    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:34:15.270142    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:34:15.270142    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:15.270142    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:34:17.512760    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:34:17.512760    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:17.512760    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:34:17.537506    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:34:17.537506    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:17.537506    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:34:17.679025    8176 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0308 00:34:17.679919    8176 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.3317286s)
	I0308 00:34:17.679919    8176 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0308 00:34:17.679919    8176 ssh_runner.go:235] Completed: cat /version.json: (4.325443s)
	I0308 00:34:17.689917    8176 ssh_runner.go:195] Run: systemctl --version
	I0308 00:34:17.698365    8176 command_runner.go:130] > systemd 252 (252)
	I0308 00:34:17.698497    8176 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0308 00:34:17.707859    8176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0308 00:34:17.710963    8176 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0308 00:34:17.710963    8176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 00:34:17.716653    8176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 00:34:17.749040    8176 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0308 00:34:17.749144    8176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 00:34:17.749204    8176 start.go:494] detecting cgroup driver to use...
	I0308 00:34:17.749435    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:34:17.776322    8176 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0308 00:34:17.785683    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 00:34:17.815765    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 00:34:17.830326    8176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 00:34:17.840256    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 00:34:17.869029    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:34:17.895075    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 00:34:17.921271    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:34:17.948781    8176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 00:34:17.975675    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 00:34:18.001988    8176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 00:34:18.017558    8176 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0308 00:34:18.027600    8176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 00:34:18.059131    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:34:18.229672    8176 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0308 00:34:18.257329    8176 start.go:494] detecting cgroup driver to use...
	I0308 00:34:18.269538    8176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 00:34:18.290293    8176 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0308 00:34:18.290365    8176 command_runner.go:130] > [Unit]
	I0308 00:34:18.290365    8176 command_runner.go:130] > Description=Docker Application Container Engine
	I0308 00:34:18.290365    8176 command_runner.go:130] > Documentation=https://docs.docker.com
	I0308 00:34:18.290365    8176 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0308 00:34:18.290365    8176 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0308 00:34:18.290365    8176 command_runner.go:130] > StartLimitBurst=3
	I0308 00:34:18.290365    8176 command_runner.go:130] > StartLimitIntervalSec=60
	I0308 00:34:18.290365    8176 command_runner.go:130] > [Service]
	I0308 00:34:18.290365    8176 command_runner.go:130] > Type=notify
	I0308 00:34:18.290365    8176 command_runner.go:130] > Restart=on-failure
	I0308 00:34:18.290486    8176 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0308 00:34:18.290486    8176 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0308 00:34:18.290544    8176 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0308 00:34:18.290544    8176 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0308 00:34:18.290591    8176 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0308 00:34:18.290591    8176 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0308 00:34:18.290591    8176 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0308 00:34:18.290672    8176 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0308 00:34:18.290672    8176 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0308 00:34:18.290672    8176 command_runner.go:130] > ExecStart=
	I0308 00:34:18.290733    8176 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0308 00:34:18.290733    8176 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0308 00:34:18.290733    8176 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0308 00:34:18.290802    8176 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0308 00:34:18.290802    8176 command_runner.go:130] > LimitNOFILE=infinity
	I0308 00:34:18.290802    8176 command_runner.go:130] > LimitNPROC=infinity
	I0308 00:34:18.290802    8176 command_runner.go:130] > LimitCORE=infinity
	I0308 00:34:18.290802    8176 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0308 00:34:18.290859    8176 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0308 00:34:18.290859    8176 command_runner.go:130] > TasksMax=infinity
	I0308 00:34:18.290859    8176 command_runner.go:130] > TimeoutStartSec=0
	I0308 00:34:18.290910    8176 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0308 00:34:18.290910    8176 command_runner.go:130] > Delegate=yes
	I0308 00:34:18.290910    8176 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0308 00:34:18.290910    8176 command_runner.go:130] > KillMode=process
	I0308 00:34:18.290910    8176 command_runner.go:130] > [Install]
	I0308 00:34:18.290966    8176 command_runner.go:130] > WantedBy=multi-user.target
	I0308 00:34:18.302551    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:34:18.332913    8176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 00:34:18.371916    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:34:18.404693    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:34:18.436130    8176 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0308 00:34:18.489704    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:34:18.508796    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:34:18.538978    8176 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0308 00:34:18.549101    8176 ssh_runner.go:195] Run: which cri-dockerd
	I0308 00:34:18.552340    8176 command_runner.go:130] > /usr/bin/cri-dockerd
	I0308 00:34:18.567191    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 00:34:18.580746    8176 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
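The 189-byte drop-in copied to /etc/systemd/system/cri-docker.service.d/10-cni.conf is not echoed in the log. Its role is to point cri-dockerd at the CNI plugin and configuration directories; a plausible shape is sketched below, with every flag and path an assumption since the real file content is not shown here.

	# Assumed content -- the actual 10-cni.conf is not printed in this log.
	[Service]
	ExecStart=
	ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// \
	  --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d \
	  --hairpin-mode=hairpin-veth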
	I0308 00:34:18.615682    8176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 00:34:18.779942    8176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 00:34:18.917784    8176 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 00:34:18.917784    8176 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
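docker.go reports that it switches Docker to the cgroupfs cgroup driver by writing a 130-byte /etc/docker/daemon.json, but the file itself is not printed. A typical daemon.json for that purpose looks like the sketch below; the exact keys minikube writes here are an assumption, only the cgroupfs driver choice comes from the log.

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}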
	I0308 00:34:18.957542    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:34:19.119895    8176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 00:34:20.749781    8176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.629693s)
	I0308 00:34:20.761426    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 00:34:20.794002    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:34:20.825718    8176 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 00:34:20.986564    8176 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 00:34:21.141633    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:34:21.311006    8176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 00:34:21.345815    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:34:21.375961    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:34:21.525964    8176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 00:34:21.601972    8176 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 00:34:21.615849    8176 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 00:34:21.622715    8176 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0308 00:34:21.623274    8176 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0308 00:34:21.623310    8176 command_runner.go:130] > Device: 0,22	Inode: 844         Links: 1
	I0308 00:34:21.623310    8176 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0308 00:34:21.623372    8176 command_runner.go:130] > Access: 2024-03-08 00:34:21.560107641 +0000
	I0308 00:34:21.623402    8176 command_runner.go:130] > Modify: 2024-03-08 00:34:21.560107641 +0000
	I0308 00:34:21.623430    8176 command_runner.go:130] > Change: 2024-03-08 00:34:21.563107655 +0000
	I0308 00:34:21.623430    8176 command_runner.go:130] >  Birth: -
	I0308 00:34:21.623595    8176 start.go:562] Will wait 60s for crictl version
	I0308 00:34:21.634447    8176 ssh_runner.go:195] Run: which crictl
	I0308 00:34:21.639395    8176 command_runner.go:130] > /usr/bin/crictl
	I0308 00:34:21.644822    8176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 00:34:21.710147    8176 command_runner.go:130] > Version:  0.1.0
	I0308 00:34:21.710147    8176 command_runner.go:130] > RuntimeName:  docker
	I0308 00:34:21.710147    8176 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0308 00:34:21.710147    8176 command_runner.go:130] > RuntimeApiVersion:  v1
	I0308 00:34:21.710266    8176 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 00:34:21.719696    8176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:34:21.746198    8176 command_runner.go:130] > 24.0.7
	I0308 00:34:21.755767    8176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:34:21.785463    8176 command_runner.go:130] > 24.0.7
	I0308 00:34:21.789739    8176 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 00:34:21.790039    8176 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 00:34:21.794511    8176 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 00:34:21.794511    8176 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 00:34:21.794511    8176 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 00:34:21.794511    8176 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 00:34:21.797147    8176 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 00:34:21.797147    8176 ip.go:210] interface addr: 172.20.48.1/20
	I0308 00:34:21.805391    8176 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 00:34:21.808037    8176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 00:34:21.831308    8176 kubeadm.go:877] updating cluster {Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.61.151 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.61.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.52.190 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 00:34:21.831610    8176 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 00:34:21.839603    8176 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 00:34:21.862087    8176 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0308 00:34:21.863024    8176 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0308 00:34:21.863096    8176 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0308 00:34:21.863127    8176 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0308 00:34:21.863166    8176 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0308 00:34:21.863201    8176 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0308 00:34:21.863201    8176 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0308 00:34:21.863201    8176 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0308 00:34:21.863201    8176 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 00:34:21.863201    8176 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0308 00:34:21.863273    8176 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0308 00:34:21.863273    8176 docker.go:615] Images already preloaded, skipping extraction
	I0308 00:34:21.872482    8176 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 00:34:21.890235    8176 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0308 00:34:21.890235    8176 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0308 00:34:21.890235    8176 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0308 00:34:21.890235    8176 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0308 00:34:21.890235    8176 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0308 00:34:21.890235    8176 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0308 00:34:21.890235    8176 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0308 00:34:21.890235    8176 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0308 00:34:21.890235    8176 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 00:34:21.890235    8176 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0308 00:34:21.890235    8176 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0308 00:34:21.896126    8176 cache_images.go:84] Images are preloaded, skipping loading
	I0308 00:34:21.896169    8176 kubeadm.go:928] updating node { 172.20.61.151 8443 v1.28.4 docker true true} ...
	I0308 00:34:21.896404    8176 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-397400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.61.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 00:34:21.904400    8176 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0308 00:34:21.936039    8176 command_runner.go:130] > cgroupfs
	I0308 00:34:21.937545    8176 cni.go:84] Creating CNI manager for ""
	I0308 00:34:21.937619    8176 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0308 00:34:21.937660    8176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 00:34:21.937692    8176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.61.151 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-397400 NodeName:multinode-397400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.61.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.61.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 00:34:21.938070    8176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.61.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-397400"
	  kubeletExtraArgs:
	    node-ip: 172.20.61.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.61.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 00:34:21.949317    8176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 00:34:21.968593    8176 command_runner.go:130] > kubeadm
	I0308 00:34:21.968632    8176 command_runner.go:130] > kubectl
	I0308 00:34:21.968687    8176 command_runner.go:130] > kubelet
	I0308 00:34:21.968740    8176 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 00:34:21.978953    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 00:34:21.981940    8176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0308 00:34:22.023085    8176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 00:34:22.051693    8176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
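The kubeadm config printed above has just been written to /var/tmp/minikube/kubeadm.yaml.new (2161 bytes). To sanity-check a config like this by hand without touching the node, one hedged option is a dry run; minikube does not run this command anywhere in the log.

	# Hedged example only.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run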
	I0308 00:34:22.089122    8176 ssh_runner.go:195] Run: grep 172.20.61.151	control-plane.minikube.internal$ /etc/hosts
	I0308 00:34:22.096189    8176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.61.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 00:34:22.130444    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:34:22.308597    8176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:34:22.334501    8176 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400 for IP: 172.20.61.151
	I0308 00:34:22.334501    8176 certs.go:194] generating shared ca certs ...
	I0308 00:34:22.334576    8176 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:34:22.335311    8176 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 00:34:22.335843    8176 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 00:34:22.336190    8176 certs.go:256] generating profile certs ...
	I0308 00:34:22.337057    8176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\client.key
	I0308 00:34:22.337270    8176 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key.02fc8808
	I0308 00:34:22.337421    8176 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt.02fc8808 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.61.151]
	I0308 00:34:22.587111    8176 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt.02fc8808 ...
	I0308 00:34:22.587111    8176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt.02fc8808: {Name:mk4ff76114cc45ed80b018d6c5c6b8ce527e0f5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:34:22.590417    8176 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key.02fc8808 ...
	I0308 00:34:22.590417    8176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key.02fc8808: {Name:mk785c22b94ac52191b29ae5556f426c124b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:34:22.592097    8176 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt.02fc8808 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt
	I0308 00:34:22.597901    8176 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key.02fc8808 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key
	I0308 00:34:22.604903    8176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.key
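The apiserver certificate generated above carries SANs for 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster kubernetes Service IP), 127.0.0.1, 10.0.0.1, and the node IP 172.20.61.151. To confirm the SAN list on the generated file, a hedged check (the profile path from the log, shown POSIX-style for readability) is:

	openssl x509 -noout -text \
	  -in ~/.minikube/profiles/multinode-397400/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'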
	I0308 00:34:22.604903    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 00:34:22.606084    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0308 00:34:22.606240    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 00:34:22.606400    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 00:34:22.606565    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0308 00:34:22.606622    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0308 00:34:22.606905    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0308 00:34:22.607135    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0308 00:34:22.607381    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 00:34:22.607381    8176 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 00:34:22.607977    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 00:34:22.608409    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 00:34:22.608721    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 00:34:22.608879    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 00:34:22.608879    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 00:34:22.609501    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:34:22.609837    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0308 00:34:22.609837    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0308 00:34:22.610739    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 00:34:22.656857    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 00:34:22.697992    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 00:34:22.734810    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 00:34:22.783697    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0308 00:34:22.823229    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 00:34:22.864862    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 00:34:22.909778    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 00:34:22.949951    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 00:34:22.987859    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 00:34:23.023596    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 00:34:23.065497    8176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 00:34:23.101133    8176 ssh_runner.go:195] Run: openssl version
	I0308 00:34:23.109410    8176 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0308 00:34:23.118886    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 00:34:23.147351    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:34:23.150322    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:34:23.150322    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:34:23.155717    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:34:23.172927    8176 command_runner.go:130] > b5213941
	I0308 00:34:23.184619    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 00:34:23.212501    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 00:34:23.239151    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 00:34:23.243661    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:34:23.245251    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:34:23.255109    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 00:34:23.257956    8176 command_runner.go:130] > 51391683
	I0308 00:34:23.272509    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0308 00:34:23.300626    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 00:34:23.326819    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 00:34:23.333991    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:34:23.334068    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:34:23.343448    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 00:34:23.351995    8176 command_runner.go:130] > 3ec20f2e
	I0308 00:34:23.364707    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
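The three blocks above install minikubeCA.pem, 8324.pem and 83242.pem into the guest's OpenSSL trust store: `openssl x509 -hash -noout` prints the subject-name hash (b5213941, 51391683, 3ec20f2e) that OpenSSL uses for lookups, and each cert is then symlinked as /etc/ssl/certs/<hash>.0. A minimal Go sketch of that step, shelling out to the same commands (the helper name and error handling are ours, not minikube's):

    // Sketch: hash a CA cert the way OpenSSL expects and symlink it as
    // /etc/ssl/certs/<hash>.0, mirroring the "test -L ... || ln -fs ..." lines above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func installCA(pemPath string) error {
        // `openssl x509 -hash -noout` prints the subject-name hash (e.g. b5213941).
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("install failed:", err)
        }
    }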
	I0308 00:34:23.392844    8176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 00:34:23.398920    8176 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 00:34:23.398920    8176 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0308 00:34:23.398920    8176 command_runner.go:130] > Device: 8,1	Inode: 1053989     Links: 1
	I0308 00:34:23.398920    8176 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0308 00:34:23.398920    8176 command_runner.go:130] > Access: 2024-03-08 00:13:27.799342596 +0000
	I0308 00:34:23.398920    8176 command_runner.go:130] > Modify: 2024-03-08 00:13:27.799342596 +0000
	I0308 00:34:23.399097    8176 command_runner.go:130] > Change: 2024-03-08 00:13:27.799342596 +0000
	I0308 00:34:23.399097    8176 command_runner.go:130] >  Birth: 2024-03-08 00:13:27.799342596 +0000
	I0308 00:34:23.409065    8176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 00:34:23.418273    8176 command_runner.go:130] > Certificate will not expire
	I0308 00:34:23.428091    8176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 00:34:23.432951    8176 command_runner.go:130] > Certificate will not expire
	I0308 00:34:23.446895    8176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 00:34:23.455308    8176 command_runner.go:130] > Certificate will not expire
	I0308 00:34:23.464761    8176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 00:34:23.474341    8176 command_runner.go:130] > Certificate will not expire
	I0308 00:34:23.485334    8176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 00:34:23.493471    8176 command_runner.go:130] > Certificate will not expire
	I0308 00:34:23.505480    8176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 00:34:23.509885    8176 command_runner.go:130] > Certificate will not expire
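Each control-plane certificate above is checked with `openssl x509 -checkend 86400`, which exits non-zero when the certificate will expire within the next 86400 seconds (24 hours); "Certificate will not expire" means the cert can be reused instead of regenerated. The same test can be expressed natively with crypto/x509; a sketch under that assumption (the path is copied from the log, the function is illustrative only):

    // Sketch: report whether a PEM certificate expires within the given window,
    // equivalent in spirit to `openssl x509 -checkend`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, "err:", err)
    }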
	I0308 00:34:23.514245    8176 kubeadm.go:391] StartCluster: {Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.2
8.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.61.151 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.61.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.52.190 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 00:34:23.523376    8176 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 00:34:23.554863    8176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 00:34:23.564221    8176 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0308 00:34:23.564221    8176 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0308 00:34:23.564221    8176 command_runner.go:130] > /var/lib/minikube/etcd:
	I0308 00:34:23.564221    8176 command_runner.go:130] > member
	W0308 00:34:23.571303    8176 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 00:34:23.571339    8176 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 00:34:23.571339    8176 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 00:34:23.582452    8176 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 00:34:23.598944    8176 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 00:34:23.599884    8176 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-397400" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:34:23.600631    8176 kubeconfig.go:62] C:\Users\jenkins.minikube7\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-397400" cluster setting kubeconfig missing "multinode-397400" context setting]
	I0308 00:34:23.601507    8176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:34:23.614908    8176 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:34:23.615514    8176 kapi.go:59] client config for multinode-397400: &rest.Config{Host:"https://172.20.61.151:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400/client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400/client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData
:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 00:34:23.616221    8176 cert_rotation.go:137] Starting client certificate rotation controller
	I0308 00:34:23.620485    8176 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 00:34:23.640039    8176 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0308 00:34:23.640127    8176 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0308 00:34:23.640127    8176 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0308 00:34:23.640127    8176 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0308 00:34:23.640127    8176 command_runner.go:130] >  kind: InitConfiguration
	I0308 00:34:23.640161    8176 command_runner.go:130] >  localAPIEndpoint:
	I0308 00:34:23.640161    8176 command_runner.go:130] > -  advertiseAddress: 172.20.48.212
	I0308 00:34:23.640161    8176 command_runner.go:130] > +  advertiseAddress: 172.20.61.151
	I0308 00:34:23.640161    8176 command_runner.go:130] >    bindPort: 8443
	I0308 00:34:23.640214    8176 command_runner.go:130] >  bootstrapTokens:
	I0308 00:34:23.640214    8176 command_runner.go:130] >    - groups:
	I0308 00:34:23.640214    8176 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0308 00:34:23.640253    8176 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0308 00:34:23.640253    8176 command_runner.go:130] >    name: "multinode-397400"
	I0308 00:34:23.640291    8176 command_runner.go:130] >    kubeletExtraArgs:
	I0308 00:34:23.640291    8176 command_runner.go:130] > -    node-ip: 172.20.48.212
	I0308 00:34:23.640318    8176 command_runner.go:130] > +    node-ip: 172.20.61.151
	I0308 00:34:23.640318    8176 command_runner.go:130] >    taints: []
	I0308 00:34:23.640318    8176 command_runner.go:130] >  ---
	I0308 00:34:23.640352    8176 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0308 00:34:23.640352    8176 command_runner.go:130] >  kind: ClusterConfiguration
	I0308 00:34:23.640391    8176 command_runner.go:130] >  apiServer:
	I0308 00:34:23.640458    8176 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.20.48.212"]
	I0308 00:34:23.640518    8176 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.20.61.151"]
	I0308 00:34:23.640540    8176 command_runner.go:130] >    extraArgs:
	I0308 00:34:23.640540    8176 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0308 00:34:23.640540    8176 command_runner.go:130] >  controllerManager:
	I0308 00:34:23.640662    8176 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.20.48.212
	+  advertiseAddress: 172.20.61.151
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-397400"
	   kubeletExtraArgs:
	-    node-ip: 172.20.48.212
	+    node-ip: 172.20.61.151
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.20.48.212"]
	+  certSANs: ["127.0.0.1", "localhost", "172.20.61.151"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
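The drift check above diffs the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new; because the VM came back with a new Hyper-V lease, the advertise address, node-ip and certSANs all moved from 172.20.48.212 to 172.20.61.151, so the cluster is reconfigured from the new file. A sketch of driving that decision off diff's exit status (0 = identical, 1 = files differ); the helper is ours, not minikube's:

    // Sketch: detect kubeadm config drift from diff's exit status.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func kubeadmConfigDrifted() (bool, string, error) {
        cmd := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.CombinedOutput()
        if cmd.ProcessState != nil && cmd.ProcessState.ExitCode() == 1 {
            return true, string(out), nil // files differ: reconfigure needed
        }
        return false, string(out), err // identical, or a genuine failure
    }

    func main() {
        drifted, diff, err := kubeadmConfigDrifted()
        fmt.Println("drifted:", drifted, "err:", err)
        fmt.Print(diff)
    }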
	I0308 00:34:23.640712    8176 kubeadm.go:1153] stopping kube-system containers ...
	I0308 00:34:23.648657    8176 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 00:34:23.671840    8176 command_runner.go:130] > b8903699a2e3
	I0308 00:34:23.671840    8176 command_runner.go:130] > 84e1da671abd
	I0308 00:34:23.671840    8176 command_runner.go:130] > 13e6ea5ce4bd
	I0308 00:34:23.671840    8176 command_runner.go:130] > fdffd4f1db96
	I0308 00:34:23.671840    8176 command_runner.go:130] > 91ada1ebb521
	I0308 00:34:23.671840    8176 command_runner.go:130] > 79433b5ca644
	I0308 00:34:23.671840    8176 command_runner.go:130] > 9c957cee5d35
	I0308 00:34:23.671840    8176 command_runner.go:130] > 90ba9a9d99a3
	I0308 00:34:23.671840    8176 command_runner.go:130] > 0aaf57b801fb
	I0308 00:34:23.672952    8176 command_runner.go:130] > 4f8851b13458
	I0308 00:34:23.672952    8176 command_runner.go:130] > 23ccdb1fc3b5
	I0308 00:34:23.672952    8176 command_runner.go:130] > c0241fd304ad
	I0308 00:34:23.672952    8176 command_runner.go:130] > d4b57713d431
	I0308 00:34:23.672952    8176 command_runner.go:130] > ead2ed31c6b3
	I0308 00:34:23.672952    8176 command_runner.go:130] > 6b6ed8345b8f
	I0308 00:34:23.672952    8176 command_runner.go:130] > 45fec6e97f7a
	I0308 00:34:23.673034    8176 docker.go:483] Stopping containers: [b8903699a2e3 84e1da671abd 13e6ea5ce4bd fdffd4f1db96 91ada1ebb521 79433b5ca644 9c957cee5d35 90ba9a9d99a3 0aaf57b801fb 4f8851b13458 23ccdb1fc3b5 c0241fd304ad d4b57713d431 ead2ed31c6b3 6b6ed8345b8f 45fec6e97f7a]
	I0308 00:34:23.681325    8176 ssh_runner.go:195] Run: docker stop b8903699a2e3 84e1da671abd 13e6ea5ce4bd fdffd4f1db96 91ada1ebb521 79433b5ca644 9c957cee5d35 90ba9a9d99a3 0aaf57b801fb 4f8851b13458 23ccdb1fc3b5 c0241fd304ad d4b57713d431 ead2ed31c6b3 6b6ed8345b8f 45fec6e97f7a
	I0308 00:34:23.702772    8176 command_runner.go:130] > b8903699a2e3
	I0308 00:34:23.702772    8176 command_runner.go:130] > 84e1da671abd
	I0308 00:34:23.702772    8176 command_runner.go:130] > 13e6ea5ce4bd
	I0308 00:34:23.702772    8176 command_runner.go:130] > fdffd4f1db96
	I0308 00:34:23.702772    8176 command_runner.go:130] > 91ada1ebb521
	I0308 00:34:23.702772    8176 command_runner.go:130] > 79433b5ca644
	I0308 00:34:23.703691    8176 command_runner.go:130] > 9c957cee5d35
	I0308 00:34:23.703691    8176 command_runner.go:130] > 90ba9a9d99a3
	I0308 00:34:23.703691    8176 command_runner.go:130] > 0aaf57b801fb
	I0308 00:34:23.703691    8176 command_runner.go:130] > 4f8851b13458
	I0308 00:34:23.703738    8176 command_runner.go:130] > 23ccdb1fc3b5
	I0308 00:34:23.703738    8176 command_runner.go:130] > c0241fd304ad
	I0308 00:34:23.703738    8176 command_runner.go:130] > d4b57713d431
	I0308 00:34:23.703766    8176 command_runner.go:130] > ead2ed31c6b3
	I0308 00:34:23.703766    8176 command_runner.go:130] > 6b6ed8345b8f
	I0308 00:34:23.703766    8176 command_runner.go:130] > 45fec6e97f7a
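The kube-system containers are enumerated with a docker ps name filter matching kubelet's `k8s_<container>_<pod>_<namespace>_...` naming scheme and then stopped with a single docker stop call, as the two ssh_runner lines above show. A minimal sketch of the same pair of commands (error handling simplified):

    // Sketch: list and stop kube-system containers by kubelet's name prefix.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return
        }
        // One docker stop invocation for all IDs, as in docker.go:483 above.
        args := append([]string{"stop"}, ids...)
        fmt.Println(exec.Command("docker", args...).Run())
    }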
	I0308 00:34:23.713740    8176 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 00:34:23.746320    8176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 00:34:23.757805    8176 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0308 00:34:23.757805    8176 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0308 00:34:23.757805    8176 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0308 00:34:23.757805    8176 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 00:34:23.762788    8176 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 00:34:23.762788    8176 kubeadm.go:156] found existing configuration files:
	
	I0308 00:34:23.772541    8176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 00:34:23.791099    8176 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 00:34:23.791655    8176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 00:34:23.804267    8176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 00:34:23.832008    8176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 00:34:23.834172    8176 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 00:34:23.846253    8176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 00:34:23.857107    8176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 00:34:23.883304    8176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 00:34:23.885109    8176 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 00:34:23.897616    8176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 00:34:23.909113    8176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 00:34:23.933201    8176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 00:34:23.947101    8176 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 00:34:23.948205    8176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 00:34:23.957739    8176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 00:34:23.984281    8176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 00:34:23.991048    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 00:34:24.374711    8176 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 00:34:24.374792    8176 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0308 00:34:24.374792    8176 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0308 00:34:24.374792    8176 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 00:34:24.374864    8176 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0308 00:34:24.374864    8176 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0308 00:34:24.374864    8176 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0308 00:34:24.374864    8176 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0308 00:34:24.374924    8176 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0308 00:34:24.374924    8176 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 00:34:24.374924    8176 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 00:34:24.374985    8176 command_runner.go:130] > [certs] Using the existing "sa" key
	I0308 00:34:24.374985    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 00:34:25.667520    8176 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 00:34:25.667520    8176 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 00:34:25.667520    8176 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 00:34:25.667520    8176 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 00:34:25.667520    8176 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 00:34:25.667520    8176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2924123s)
	I0308 00:34:25.667520    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 00:34:25.931130    8176 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 00:34:25.931203    8176 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 00:34:25.931203    8176 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0308 00:34:25.931203    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 00:34:26.011234    8176 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 00:34:26.011315    8176 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 00:34:26.011340    8176 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 00:34:26.011340    8176 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 00:34:26.011340    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 00:34:26.093351    8176 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
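Because existing configuration files were found, the restart path re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the updated /var/tmp/minikube/kubeadm.yaml instead of performing a full re-init; the "Using existing ..." and "Creating static Pod manifest ..." lines above are kubeadm's output for each phase. A sketch of that sequence driven from Go; the phase list and command line are taken from the log, the loop itself is illustrative:

    // Sketch: run the kubeadm init phases seen above against the refreshed config.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
            if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
                fmt.Println(phase, "failed:", err)
                return
            }
        }
    }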
	I0308 00:34:26.093351    8176 api_server.go:52] waiting for apiserver process to appear ...
	I0308 00:34:26.108415    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:34:26.608062    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:34:27.113366    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:34:27.617247    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:34:28.124625    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:34:28.147840    8176 command_runner.go:130] > 1978
	I0308 00:34:28.147963    8176 api_server.go:72] duration metric: took 2.0544693s to wait for apiserver process to appear ...
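The apiserver process is detected by polling pgrep for a kube-apiserver command line mentioning minikube, retrying roughly every 500 ms until a PID is returned (1978 above, after about 2 seconds). A sketch of that wait; the interval and timeout are our choices, not minikube's:

    // Sketch: wait for the kube-apiserver process by polling pgrep.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kube-apiserver process")
    }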
	I0308 00:34:28.147963    8176 api_server.go:88] waiting for apiserver healthz status ...
	I0308 00:34:28.148046    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:34:31.362412    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 00:34:31.363189    8176 api_server.go:103] status: https://172.20.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 00:34:31.363189    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:34:31.376701    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 00:34:31.376701    8176 api_server.go:103] status: https://172.20.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 00:34:31.651716    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:34:31.659605    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 00:34:31.659695    8176 api_server.go:103] status: https://172.20.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 00:34:32.162623    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:34:32.171509    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 00:34:32.173978    8176 api_server.go:103] status: https://172.20.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 00:34:32.662837    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:34:32.673538    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 00:34:32.673538    8176 api_server.go:103] status: https://172.20.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 00:34:33.150866    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:34:33.157951    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 200:
	ok
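The healthz wait above tolerates the 403s returned to the anonymous user before RBAC is bootstrapped and the 500s returned while post-start hooks such as rbac/bootstrap-roles are still pending, and only succeeds once /healthz answers 200. A sketch of that polling loop; TLS verification is skipped here purely to keep the example short (the real client presents the cluster CA and client certificates from the kubeconfig):

    // Sketch: poll the apiserver /healthz endpoint until it returns HTTP 200,
    // treating 403/500 responses as "not ready yet".
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://172.20.61.151:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond) // retry on errors, 403s and 500s
        }
        fmt.Println("timed out waiting for /healthz")
    }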
	I0308 00:34:33.159228    8176 round_trippers.go:463] GET https://172.20.61.151:8443/version
	I0308 00:34:33.159228    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:33.159900    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:33.160159    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:33.172576    8176 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0308 00:34:33.172576    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:33.172576    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:33.172576    8176 round_trippers.go:580]     Content-Length: 264
	I0308 00:34:33.172576    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:33 GMT
	I0308 00:34:33.172576    8176 round_trippers.go:580]     Audit-Id: 60fc7eeb-b43b-4f01-bfbc-cea30b7a483f
	I0308 00:34:33.172576    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:33.172576    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:33.172576    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:33.172576    8176 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0308 00:34:33.173105    8176 api_server.go:141] control plane version: v1.28.4
	I0308 00:34:33.173145    8176 api_server.go:131] duration metric: took 5.0250797s to wait for apiserver health ...
	I0308 00:34:33.173145    8176 cni.go:84] Creating CNI manager for ""
	I0308 00:34:33.173145    8176 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0308 00:34:33.176778    8176 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0308 00:34:33.187469    8176 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0308 00:34:33.199410    8176 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0308 00:34:33.199410    8176 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0308 00:34:33.199574    8176 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0308 00:34:33.199574    8176 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0308 00:34:33.199574    8176 command_runner.go:130] > Access: 2024-03-08 00:33:11.768939300 +0000
	I0308 00:34:33.199574    8176 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0308 00:34:33.199574    8176 command_runner.go:130] > Change: 2024-03-08 00:33:04.561000000 +0000
	I0308 00:34:33.199574    8176 command_runner.go:130] >  Birth: -
	I0308 00:34:33.199697    8176 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0308 00:34:33.199817    8176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0308 00:34:33.274756    8176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0308 00:34:34.710226    8176 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0308 00:34:34.710226    8176 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0308 00:34:34.710226    8176 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0308 00:34:34.710226    8176 command_runner.go:130] > daemonset.apps/kindnet configured
	I0308 00:34:34.710226    8176 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.4354563s)
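With three nodes in the profile, minikube recommends kindnet as the CNI, writes the rendered manifest to /var/tmp/minikube/cni.yaml and applies it with the kubectl binary bundled in the guest; the "unchanged"/"configured" lines show the objects already existed from the previous run. A sketch of the apply call itself, the same command the ssh_runner line above executes:

    // Sketch: apply the CNI manifest with the bundled kubectl.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
            "apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", "/var/tmp/minikube/cni.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }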
	I0308 00:34:34.710378    8176 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 00:34:34.710537    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:34:34.710537    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:34.710537    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:34.710537    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:34.716013    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:34.716013    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:34.716013    8176 round_trippers.go:580]     Audit-Id: 52247c24-8834-4cf4-b37c-8c0ce7c91443
	I0308 00:34:34.716132    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:34.716132    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:34.716132    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:34.716132    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:34.716132    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:34 GMT
	I0308 00:34:34.717885    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1675"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83581 chars]
	I0308 00:34:34.723925    8176 system_pods.go:59] 12 kube-system pods found
	I0308 00:34:34.723925    8176 system_pods.go:61] "coredns-5dd5756b68-w4hzh" [d164fdff-2fa7-412c-86e6-f0fa957e0361] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 00:34:34.723925    8176 system_pods.go:61] "etcd-multinode-397400" [afdc3d40-e2cf-4751-9d88-09ecca9f4b0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 00:34:34.723925    8176 system_pods.go:61] "kindnet-jvzwq" [3897294d-bb97-4445-a540-40cedb960e67] Running
	I0308 00:34:34.723925    8176 system_pods.go:61] "kindnet-srl7h" [e3e7e96a-d2bb-4a32-baae-52b0a30ce886] Running
	I0308 00:34:34.724514    8176 system_pods.go:61] "kindnet-wkwtm" [0f4e9963-262a-4dd2-b907-da97715a6378] Running
	I0308 00:34:34.724514    8176 system_pods.go:61] "kube-apiserver-multinode-397400" [1e615aff-4d66-4ded-b27a-16bc990c80a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 00:34:34.724514    8176 system_pods.go:61] "kube-controller-manager-multinode-397400" [33cdb29c-e857-4fc2-b950-4fdde032852f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 00:34:34.724514    8176 system_pods.go:61] "kube-proxy-gw9w9" [9b5de9a2-0643-466e-9a31-4349596c0417] Running
	I0308 00:34:34.724514    8176 system_pods.go:61] "kube-proxy-ktnrd" [e76aaee4-f97d-4d55-b458-893eef62fb22] Running
	I0308 00:34:34.724514    8176 system_pods.go:61] "kube-proxy-nt8td" [dafb9385-fe20-4849-bd58-31dcf82b4a58] Running
	I0308 00:34:34.724514    8176 system_pods.go:61] "kube-scheduler-multinode-397400" [3f029955-80be-4e3d-a157-faec2631b9b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 00:34:34.724514    8176 system_pods.go:61] "storage-provisioner" [81b55677-743c-4d2f-b04f-95928d4a3868] Running
	I0308 00:34:34.724514    8176 system_pods.go:74] duration metric: took 14.1356ms to wait for pod list to return data ...
	I0308 00:34:34.724674    8176 node_conditions.go:102] verifying NodePressure condition ...
	I0308 00:34:34.724745    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes
	I0308 00:34:34.724745    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:34.724822    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:34.724822    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:34.729633    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:34:34.729633    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:34.729633    8176 round_trippers.go:580]     Audit-Id: acc66f97-d700-4597-b9a2-56dd30e8cf5f
	I0308 00:34:34.729633    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:34.729633    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:34.729633    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:34.729633    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:34.729633    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:34 GMT
	I0308 00:34:34.729633    8176 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1675"},"items":[{"metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1651","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15627 chars]
	I0308 00:34:34.731740    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:34:34.731740    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:34:34.731740    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:34:34.731740    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:34:34.731740    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:34:34.731740    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:34:34.731740    8176 node_conditions.go:105] duration metric: took 7.0654ms to run NodePressure ...
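The NodePressure verification above is essentially GET /api/v1/nodes plus a read of each node's reported capacity (here 2 CPUs and 17734596Ki of ephemeral storage per node). A minimal decode of just those fields; note that a real call against this endpoint needs the CA and client certificates from the kubeconfig, which are omitted here:

    // Sketch: list nodes and print the capacity fields the check above reads.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    type nodeList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Capacity map[string]string `json:"capacity"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        resp, err := http.Get("https://172.20.61.151:8443/api/v1/nodes") // would need TLS client auth in practice
        if err != nil {
            fmt.Println(err)
            return
        }
        defer resp.Body.Close()
        var nl nodeList
        if err := json.NewDecoder(resp.Body).Decode(&nl); err != nil {
            fmt.Println(err)
            return
        }
        for _, n := range nl.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
        }
    }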
	I0308 00:34:34.731740    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 00:34:34.937543    8176 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0308 00:34:35.027195    8176 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0308 00:34:35.033852    8176 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 00:34:35.033852    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0308 00:34:35.033852    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.033852    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.033852    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.035047    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:35.040355    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.040355    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.040355    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.040355    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.040355    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.040355    8176 round_trippers.go:580]     Audit-Id: cd3c4c1d-17f8-421d-81e6-9e92807958bc
	I0308 00:34:35.040441    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.041576    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1677"},"items":[{"metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29350 chars]
	I0308 00:34:35.043330    8176 kubeadm.go:733] kubelet initialised
	I0308 00:34:35.043879    8176 kubeadm.go:734] duration metric: took 10.0269ms waiting for restarted kubelet to initialise ...
	I0308 00:34:35.043879    8176 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:34:35.043963    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:34:35.043963    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.043963    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.043963    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.044672    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:35.044672    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.044672    8176 round_trippers.go:580]     Audit-Id: 719d8539-a467-474c-ae8c-25d50be24139
	I0308 00:34:35.044672    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.044672    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.044672    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.044672    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.044672    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.051863    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1677"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83581 chars]
	I0308 00:34:35.055426    8176 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:35.055604    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:34:35.055665    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.055665    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.055724    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.056365    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:35.058439    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.058439    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.058439    8176 round_trippers.go:580]     Audit-Id: c38158d2-38a1-433f-9fa4-a53016d9da4c
	I0308 00:34:35.058439    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.058439    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.058439    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.058439    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.058663    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0308 00:34:35.059199    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:35.059398    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.059398    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.059398    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.061090    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:35.063838    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.063838    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.063838    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.063838    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.063838    8176 round_trippers.go:580]     Audit-Id: a6744353-cedb-40e9-84aa-d68fa601f24f
	I0308 00:34:35.063838    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.063838    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.064459    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1651","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0308 00:34:35.064989    8176 pod_ready.go:97] node "multinode-397400" hosting pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.065061    8176 pod_ready.go:81] duration metric: took 9.6351ms for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	E0308 00:34:35.065061    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400" hosting pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.065061    8176 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:35.065208    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:35.065266    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.065302    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.065302    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.066657    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:35.068533    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.068533    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.068533    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.068617    8176 round_trippers.go:580]     Audit-Id: c370e46e-a467-4f89-a1d3-c8d6f1e86730
	I0308 00:34:35.068651    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.068651    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.068651    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.068651    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:35.069262    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:35.069299    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.069333    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.069333    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.070786    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:35.070786    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.070786    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.070786    8176 round_trippers.go:580]     Audit-Id: f8607540-177a-4139-8f3f-d2c38fad033a
	I0308 00:34:35.070786    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.073122    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.073122    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.073122    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.073348    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1651","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0308 00:34:35.073348    8176 pod_ready.go:97] node "multinode-397400" hosting pod "etcd-multinode-397400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.073348    8176 pod_ready.go:81] duration metric: took 8.2863ms for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	E0308 00:34:35.073348    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400" hosting pod "etcd-multinode-397400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.073959    8176 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:35.074121    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-397400
	I0308 00:34:35.074121    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.074121    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.074121    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.074820    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:35.077342    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.077342    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.077342    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.077342    8176 round_trippers.go:580]     Audit-Id: 5415cd52-4dfa-414e-9e2b-d56f89784c33
	I0308 00:34:35.077342    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.077342    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.077342    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.077342    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-397400","namespace":"kube-system","uid":"1e615aff-4d66-4ded-b27a-16bc990c80a6","resourceVersion":"1666","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.61.151:8443","kubernetes.io/config.hash":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.mirror":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143837944Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7644 chars]
	I0308 00:34:35.078351    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:35.078427    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.078427    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.078427    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.081722    8176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:34:35.081795    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.081795    8176 round_trippers.go:580]     Audit-Id: 3cb4eb36-6a7a-4d09-9c32-fc599bad85f1
	I0308 00:34:35.081824    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.081824    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.081824    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.081824    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.081824    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.081824    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1651","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0308 00:34:35.082568    8176 pod_ready.go:97] node "multinode-397400" hosting pod "kube-apiserver-multinode-397400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.082568    8176 pod_ready.go:81] duration metric: took 8.6094ms for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	E0308 00:34:35.082568    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400" hosting pod "kube-apiserver-multinode-397400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.082777    8176 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:35.082857    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-397400
	I0308 00:34:35.082914    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.082914    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.082914    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.083241    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:35.083241    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.083241    8176 round_trippers.go:580]     Audit-Id: 3e3395b1-8a68-497e-9674-80ac6e22600b
	I0308 00:34:35.083241    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.083241    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.083241    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.083241    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.083241    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.086303    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-397400","namespace":"kube-system","uid":"33cdb29c-e857-4fc2-b950-4fdde032852f","resourceVersion":"1663","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.mirror":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.seen":"2024-03-08T00:13:39.441057580Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I0308 00:34:35.123199    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:35.123199    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.123199    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.123199    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.123771    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:35.126479    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.126479    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.126479    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.126479    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.126479    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.126479    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.126479    8176 round_trippers.go:580]     Audit-Id: 010a1783-976e-43b1-90c5-f417f8372e44
	I0308 00:34:35.126871    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1651","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0308 00:34:35.127072    8176 pod_ready.go:97] node "multinode-397400" hosting pod "kube-controller-manager-multinode-397400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.127072    8176 pod_ready.go:81] duration metric: took 44.2943ms for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	E0308 00:34:35.127072    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400" hosting pod "kube-controller-manager-multinode-397400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.127072    8176 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:35.313888    8176 request.go:629] Waited for 186.5688ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:34:35.314252    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:34:35.314252    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.314252    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.314252    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.320191    8176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:34:35.320191    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.320191    8176 round_trippers.go:580]     Audit-Id: 4120baf2-01d1-45b6-8822-9924e9fa4d3f
	I0308 00:34:35.320191    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.320191    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.320191    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.320191    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.320191    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.320753    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gw9w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b5de9a2-0643-466e-9a31-4349596c0417","resourceVersion":"610","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0308 00:34:35.514517    8176 request.go:629] Waited for 192.9884ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:34:35.514707    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:34:35.514773    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.514773    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.514773    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.515517    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:35.518515    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.518515    8176 round_trippers.go:580]     Audit-Id: 59ee09d1-e9f4-43e9-bfbc-ddef6e505913
	I0308 00:34:35.518515    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.518515    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.518515    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.518515    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.518515    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.518847    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"1341","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0308 00:34:35.519085    8176 pod_ready.go:92] pod "kube-proxy-gw9w9" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:35.519085    8176 pod_ready.go:81] duration metric: took 392.0095ms for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:35.519085    8176 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:35.719894    8176 request.go:629] Waited for 200.6268ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:34:35.720119    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:34:35.720119    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.720119    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.720119    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.720452    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:35.723598    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.723598    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.723598    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.723598    8176 round_trippers.go:580]     Audit-Id: 5c96f248-9e15-42cc-9cd8-bad90a5434a6
	I0308 00:34:35.723598    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.723598    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.723598    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.724064    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ktnrd","generateName":"kube-proxy-","namespace":"kube-system","uid":"e76aaee4-f97d-4d55-b458-893eef62fb22","resourceVersion":"1626","creationTimestamp":"2024-03-08T00:20:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:20:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0308 00:34:35.914237    8176 request.go:629] Waited for 189.5417ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:34:35.914410    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:34:35.914488    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.914488    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.914488    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.916223    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:35.918357    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.918397    8176 round_trippers.go:580]     Audit-Id: cb964b14-5978-4fa9-ab7a-95c79cb1fb8e
	I0308 00:34:35.918397    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.918423    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.918423    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.918423    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.918423    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.918423    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"4a97100d-ade6-4031-b2fe-9e9ba736320e","resourceVersion":"1638","creationTimestamp":"2024-03-08T00:30:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_30_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:30:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0308 00:34:35.919170    8176 pod_ready.go:97] node "multinode-397400-m03" hosting pod "kube-proxy-ktnrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400-m03" has status "Ready":"Unknown"
	I0308 00:34:35.919254    8176 pod_ready.go:81] duration metric: took 400.1655ms for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	E0308 00:34:35.919276    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400-m03" hosting pod "kube-proxy-ktnrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400-m03" has status "Ready":"Unknown"
	I0308 00:34:35.919276    8176 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:36.111506    8176 request.go:629] Waited for 192.0592ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:34:36.111600    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:34:36.111600    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:36.111600    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:36.111600    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:36.112018    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:36.112018    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:36.112018    8176 round_trippers.go:580]     Audit-Id: 4032cbba-7e6b-406c-9472-b2e285bf591c
	I0308 00:34:36.112018    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:36.112018    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:36.112018    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:36.112018    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:36.112018    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:36 GMT
	I0308 00:34:36.115363    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nt8td","generateName":"kube-proxy-","namespace":"kube-system","uid":"dafb9385-fe20-4849-bd58-31dcf82b4a58","resourceVersion":"1674","creationTimestamp":"2024-03-08T00:13:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0308 00:34:36.333175    8176 request.go:629] Waited for 217.0681ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:36.333474    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:36.333474    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:36.333474    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:36.333474    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:36.333897    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:36.333897    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:36.333897    8176 round_trippers.go:580]     Audit-Id: f6ae0c0a-2574-41fc-b050-b1ddda1ef2fa
	I0308 00:34:36.337423    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:36.337423    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:36.337423    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:36.337423    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:36.337423    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:36 GMT
	I0308 00:34:36.337846    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1651","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0308 00:34:36.337892    8176 pod_ready.go:97] node "multinode-397400" hosting pod "kube-proxy-nt8td" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:36.337892    8176 pod_ready.go:81] duration metric: took 418.6121ms for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	E0308 00:34:36.337892    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400" hosting pod "kube-proxy-nt8td" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:36.337892    8176 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:36.518830    8176 request.go:629] Waited for 180.121ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:36.518996    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:36.518996    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:36.518996    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:36.519313    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:36.526206    8176 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 00:34:36.526256    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:36.526256    8176 round_trippers.go:580]     Audit-Id: 7f1f98e1-44e3-4521-a98f-dfd96f558fa0
	I0308 00:34:36.526256    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:36.526256    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:36.526317    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:36.526317    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:36.526317    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:36 GMT
	I0308 00:34:36.527136    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1664","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0308 00:34:36.712513    8176 request.go:629] Waited for 184.4755ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:36.712662    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:36.712662    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:36.712662    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:36.712662    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:36.727917    8176 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0308 00:34:36.727917    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:36.727917    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:36.727917    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:36.727917    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:36.727917    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:36 GMT
	I0308 00:34:36.727917    8176 round_trippers.go:580]     Audit-Id: da952290-7b8b-4f73-bfb0-16265f768b76
	I0308 00:34:36.727917    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:36.727917    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:36.918218    8176 request.go:629] Waited for 78.5538ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:36.918218    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:36.918337    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:36.918337    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:36.918337    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:36.918508    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:36.921739    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:36.921739    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:36.921739    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:36 GMT
	I0308 00:34:36.921739    8176 round_trippers.go:580]     Audit-Id: 448109be-1fb7-460e-a9e9-844fb9065fac
	I0308 00:34:36.921739    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:36.921739    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:36.921739    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:36.922257    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1664","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0308 00:34:37.111389    8176 request.go:629] Waited for 188.1919ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:37.111389    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:37.111389    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:37.111389    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:37.111389    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:37.117131    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:37.117131    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:37.117131    8176 round_trippers.go:580]     Audit-Id: e466edb8-ea88-4faf-8b6b-47cd8ac0a254
	I0308 00:34:37.117131    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:37.117131    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:37.117131    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:37.117131    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:37.117131    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:37 GMT
	I0308 00:34:37.117131    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:37.352876    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:37.352876    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:37.352876    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:37.352876    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:37.353408    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:37.353408    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:37.353408    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:37.357253    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:37.357253    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:37.357253    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:37.357253    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:37 GMT
	I0308 00:34:37.357253    8176 round_trippers.go:580]     Audit-Id: 51ea9908-3ab9-40fb-ac6a-0ec37b8a19c8
	I0308 00:34:37.357343    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1664","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0308 00:34:37.514137    8176 request.go:629] Waited for 155.8959ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:37.514137    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:37.514137    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:37.514137    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:37.514137    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:37.514564    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:37.514564    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:37.514564    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:37.514564    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:37.514564    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:37.514564    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:37 GMT
	I0308 00:34:37.514564    8176 round_trippers.go:580]     Audit-Id: a9927ced-c55b-48f0-8490-180fa2ae4476
	I0308 00:34:37.514564    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:37.517748    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:37.847981    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:37.847981    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:37.847981    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:37.847981    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:37.853897    8176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:34:37.853897    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:37.853897    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:37.853897    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:37.853897    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:37 GMT
	I0308 00:34:37.853897    8176 round_trippers.go:580]     Audit-Id: 39bdac16-ea6a-4cb1-87ac-a5351f1a1541
	I0308 00:34:37.853897    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:37.853897    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:37.854636    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1664","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0308 00:34:37.914886    8176 request.go:629] Waited for 60.098ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:37.915267    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:37.915267    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:37.915372    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:37.915372    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:37.916096    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:37.916096    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:37.916096    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:37 GMT
	I0308 00:34:37.916096    8176 round_trippers.go:580]     Audit-Id: a2a24143-a6fb-4b0d-9440-a9d644397789
	I0308 00:34:37.916096    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:37.916096    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:37.918654    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:37.918654    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:37.918730    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:38.344199    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:38.344199    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:38.344199    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:38.344199    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:38.344761    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:38.344761    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:38.344761    8176 round_trippers.go:580]     Audit-Id: ba7a8944-e158-4a12-9fbe-8e159da83b77
	I0308 00:34:38.344761    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:38.344761    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:38.344761    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:38.344761    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:38.344761    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:38 GMT
	I0308 00:34:38.352976    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1664","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0308 00:34:38.353651    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:38.353717    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:38.353717    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:38.353717    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:38.360311    8176 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 00:34:38.360311    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:38.360351    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:38.360351    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:38 GMT
	I0308 00:34:38.360379    8176 round_trippers.go:580]     Audit-Id: 468fc0ca-462f-41f2-a05b-b308cee31053
	I0308 00:34:38.360379    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:38.360379    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:38.360379    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:38.360379    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:38.361078    8176 pod_ready.go:102] pod "kube-scheduler-multinode-397400" in "kube-system" namespace has status "Ready":"False"
	I0308 00:34:38.849927    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:38.850005    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:38.850005    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:38.850005    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:38.855160    8176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:34:38.855222    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:38.855222    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:38 GMT
	I0308 00:34:38.855222    8176 round_trippers.go:580]     Audit-Id: 618361a3-b244-48c9-b888-1a94fd5ddfa4
	I0308 00:34:38.855222    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:38.855222    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:38.855222    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:38.855222    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:38.855222    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1664","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0308 00:34:38.856066    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:38.856116    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:38.856116    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:38.856116    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:38.856723    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:38.858957    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:38.858957    8176 round_trippers.go:580]     Audit-Id: 587ee92b-d83e-40c4-b69c-907582239c4c
	I0308 00:34:38.858957    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:38.858957    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:38.858957    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:38.858957    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:38.858957    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:38 GMT
	I0308 00:34:38.859217    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:39.353616    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:39.353706    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:39.353706    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:39.353706    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:39.353972    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:39.357037    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:39.357037    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:39.357037    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:39.357037    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:39 GMT
	I0308 00:34:39.357037    8176 round_trippers.go:580]     Audit-Id: 5c99b4e3-b39a-4af4-ad06-f4461e4d9227
	I0308 00:34:39.357037    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:39.357037    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:39.357891    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1744","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0308 00:34:39.358400    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:39.358400    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:39.358400    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:39.358400    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:39.359027    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:39.362125    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:39.362125    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:39.362125    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:39.362125    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:39 GMT
	I0308 00:34:39.362125    8176 round_trippers.go:580]     Audit-Id: 6e05dbdb-9c6a-4950-b588-24bf8b9fd32d
	I0308 00:34:39.362125    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:39.362125    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:39.362290    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:39.362290    8176 pod_ready.go:92] pod "kube-scheduler-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:39.362290    8176 pod_ready.go:81] duration metric: took 3.0243694s for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:39.362290    8176 pod_ready.go:38] duration metric: took 4.3182857s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:34:39.362290    8176 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 00:34:39.378659    8176 command_runner.go:130] > -16
	I0308 00:34:39.379263    8176 ops.go:34] apiserver oom_adj: -16
	I0308 00:34:39.379263    8176 kubeadm.go:591] duration metric: took 15.807746s to restartPrimaryControlPlane
	I0308 00:34:39.379263    8176 kubeadm.go:393] duration metric: took 15.8648694s to StartCluster
	I0308 00:34:39.379263    8176 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:34:39.379263    8176 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:34:39.381130    8176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:34:39.382561    8176 start.go:234] Will wait 6m0s for node &{Name: IP:172.20.61.151 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 00:34:39.382628    8176 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 00:34:39.386753    8176 out.go:177] * Verifying Kubernetes components...
	I0308 00:34:39.382628    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:34:39.394131    8176 out.go:177] * Enabled addons: 
	I0308 00:34:39.395079    8176 addons.go:505] duration metric: took 12.5177ms for enable addons: enabled=[]
	I0308 00:34:39.399438    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:34:39.637408    8176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:34:39.686940    8176 node_ready.go:35] waiting up to 6m0s for node "multinode-397400" to be "Ready" ...
	I0308 00:34:39.687223    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:39.687281    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:39.687281    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:39.687281    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:39.687499    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:39.687499    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:39.687499    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:39.687499    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:39 GMT
	I0308 00:34:39.687499    8176 round_trippers.go:580]     Audit-Id: e2ea79eb-800b-4fb3-ba19-3f420a546a7b
	I0308 00:34:39.687499    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:39.687499    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:39.687499    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:39.692451    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:39.692974    8176 node_ready.go:49] node "multinode-397400" has status "Ready":"True"
	I0308 00:34:39.693086    8176 node_ready.go:38] duration metric: took 6.017ms for node "multinode-397400" to be "Ready" ...
	I0308 00:34:39.693086    8176 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:34:39.693277    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:34:39.693277    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:39.693277    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:39.693277    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:39.694091    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:39.694091    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:39.699026    8176 round_trippers.go:580]     Audit-Id: 227f3c4e-7a57-4b1f-b2a9-8fcce01a6aba
	I0308 00:34:39.699026    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:39.699026    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:39.699026    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:39.699026    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:39.699026    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:39 GMT
	I0308 00:34:39.700298    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1744"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83337 chars]
	I0308 00:34:39.704815    8176 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:39.715490    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:34:39.715490    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:39.715550    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:39.715550    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:39.718820    8176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:34:39.718884    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:39.718884    8176 round_trippers.go:580]     Audit-Id: 4e734e47-605c-41f5-942b-5c0e05460d64
	I0308 00:34:39.718884    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:39.718884    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:39.718884    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:39.718944    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:39.718944    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:39 GMT
	I0308 00:34:39.719172    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0308 00:34:39.916105    8176 request.go:629] Waited for 195.9047ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:39.916201    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:39.916201    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:39.916409    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:39.916409    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:39.919897    8176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:34:39.919897    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:39.919897    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:39.919897    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:39 GMT
	I0308 00:34:39.919897    8176 round_trippers.go:580]     Audit-Id: 3beb2fc9-faf0-4231-b416-f8bca6263cbb
	I0308 00:34:39.920053    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:39.920053    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:39.920053    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:39.920298    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:40.220313    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:34:40.220313    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:40.220313    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:40.220313    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:40.224709    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:34:40.224709    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:40.224709    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:40.224709    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:40 GMT
	I0308 00:34:40.224709    8176 round_trippers.go:580]     Audit-Id: 6cf4fb91-3dee-4335-abd5-25dde902a7d3
	I0308 00:34:40.224709    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:40.224709    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:40.224709    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:40.224936    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0308 00:34:40.320446    8176 request.go:629] Waited for 94.7631ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:40.320446    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:40.320446    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:40.320446    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:40.320446    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:40.325733    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:34:40.325794    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:40.325794    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:40.325849    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:40 GMT
	I0308 00:34:40.325849    8176 round_trippers.go:580]     Audit-Id: 26e81ea0-f7f6-47ec-a6fe-00363ee6cbaf
	I0308 00:34:40.325849    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:40.325849    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:40.325849    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:40.326149    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:40.707650    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:34:40.707650    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:40.707650    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:40.707650    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:40.708228    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:40.708228    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:40.708228    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:40.708228    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:40.708228    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:40.708228    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:40.708228    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:40 GMT
	I0308 00:34:40.708228    8176 round_trippers.go:580]     Audit-Id: e83ccf82-c3a8-4560-a791-c8ca0d8d93e2
	I0308 00:34:40.712313    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0308 00:34:40.713020    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:40.713020    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:40.713020    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:40.713020    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:40.715476    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:34:40.715476    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:40.715476    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:40 GMT
	I0308 00:34:40.715476    8176 round_trippers.go:580]     Audit-Id: dea190ad-b2f8-4bbd-a526-f3eed05ea914
	I0308 00:34:40.715476    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:40.715476    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:40.715476    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:40.715476    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:40.715476    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:41.217553    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:34:41.217553    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:41.217553    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:41.217553    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:41.219029    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:41.219029    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:41.219029    8176 round_trippers.go:580]     Audit-Id: b8d56965-8872-4617-b8b2-d2b9a5f644f6
	I0308 00:34:41.219029    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:41.219029    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:41.219029    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:41.219029    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:41.219029    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:41 GMT
	I0308 00:34:41.222249    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0308 00:34:41.222914    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:41.222914    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:41.222914    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:41.222914    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:41.223243    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:41.223243    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:41.223243    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:41.223243    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:41.223243    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:41.223243    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:41 GMT
	I0308 00:34:41.223243    8176 round_trippers.go:580]     Audit-Id: 056a8edd-9502-40ac-a64c-5fbe66d3da11
	I0308 00:34:41.223243    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:41.225784    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:41.705756    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:34:41.705756    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:41.705756    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:41.705756    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:41.706220    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:41.706220    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:41.706220    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:41 GMT
	I0308 00:34:41.706220    8176 round_trippers.go:580]     Audit-Id: be058aff-c34f-44da-add2-7a541e8f6955
	I0308 00:34:41.706220    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:41.706220    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:41.706220    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:41.706220    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:41.711557    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0308 00:34:41.711745    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:41.711745    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:41.711745    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:41.711745    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:41.712948    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:41.712948    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:41.712948    8176 round_trippers.go:580]     Audit-Id: 39ce9840-2f44-48e5-85a7-59feae5f8ada
	I0308 00:34:41.712948    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:41.712948    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:41.712948    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:41.712948    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:41.712948    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:41 GMT
	I0308 00:34:41.712948    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:41.712948    8176 pod_ready.go:102] pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace has status "Ready":"False"
	I0308 00:34:42.210295    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:34:42.210295    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:42.210295    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:42.210295    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:42.210730    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:42.210730    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:42.210730    8176 round_trippers.go:580]     Audit-Id: 781bbd5e-1555-4b82-90fa-57ecb1be960a
	I0308 00:34:42.210730    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:42.210730    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:42.210730    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:42.210730    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:42.210730    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:42 GMT
	I0308 00:34:42.214867    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1757","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0308 00:34:42.215713    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:42.215713    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:42.215713    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:42.215713    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:42.216526    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:42.216526    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:42.216526    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:42 GMT
	I0308 00:34:42.216526    8176 round_trippers.go:580]     Audit-Id: efa9d218-9c34-4322-8ad2-fe67350d1b02
	I0308 00:34:42.216526    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:42.216526    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:42.216526    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:42.216526    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:42.219346    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:42.219346    8176 pod_ready.go:92] pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:42.219346    8176 pod_ready.go:81] duration metric: took 2.5145072s for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:42.219346    8176 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:42.219346    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:42.219346    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:42.220407    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:42.220407    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:42.220690    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:42.220690    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:42.220690    8176 round_trippers.go:580]     Audit-Id: 2ce5f548-b49c-47e7-a2e1-38e281ac42ee
	I0308 00:34:42.220690    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:42.220690    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:42.228380    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:42.228380    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:42.228380    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:42 GMT
	I0308 00:34:42.228380    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:42.313617    8176 request.go:629] Waited for 84.5496ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:42.313890    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:42.313890    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:42.313890    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:42.313890    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:42.314092    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:42.314092    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:42.314092    8176 round_trippers.go:580]     Audit-Id: 15443abf-8b76-4759-bbd5-efffbb4b4523
	I0308 00:34:42.314092    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:42.314092    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:42.314092    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:42.314092    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:42.314092    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:42 GMT
	I0308 00:34:42.316793    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:42.732005    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:42.732005    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:42.732005    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:42.732005    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:42.732536    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:42.732536    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:42.732536    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:42.732536    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:42.732536    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:42 GMT
	I0308 00:34:42.732536    8176 round_trippers.go:580]     Audit-Id: fe85a65c-0311-4144-ae08-4c5453dc32fc
	I0308 00:34:42.732536    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:42.732536    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:42.736962    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:42.737175    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:42.737175    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:42.737175    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:42.737175    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:42.737981    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:42.737981    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:42.737981    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:42.737981    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:42.737981    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:42.737981    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:42.737981    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:42 GMT
	I0308 00:34:42.737981    8176 round_trippers.go:580]     Audit-Id: 0d94eec8-174c-4df5-bb76-ee429c1fc277
	I0308 00:34:42.740968    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:43.223396    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:43.223396    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:43.223396    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:43.223396    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:43.223858    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:43.223858    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:43.223858    8176 round_trippers.go:580]     Audit-Id: 4934fa24-3628-4971-b95e-6a0647baf02c
	I0308 00:34:43.223858    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:43.223858    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:43.223858    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:43.223858    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:43.223858    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:43 GMT
	I0308 00:34:43.228958    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:43.229167    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:43.229167    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:43.229167    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:43.229167    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:43.235852    8176 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 00:34:43.235906    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:43.235947    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:43 GMT
	I0308 00:34:43.235947    8176 round_trippers.go:580]     Audit-Id: bdb77c69-6775-459e-a0c4-ab3c80c4b1d6
	I0308 00:34:43.235983    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:43.235983    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:43.235983    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:43.235983    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:43.236175    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:43.724719    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:43.724791    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:43.724791    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:43.724791    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:43.725035    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:43.725035    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:43.725035    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:43 GMT
	I0308 00:34:43.725035    8176 round_trippers.go:580]     Audit-Id: 77a50dfd-074c-4b8d-bc94-0e52ded0b5a9
	I0308 00:34:43.725035    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:43.725035    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:43.725035    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:43.725035    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:43.728560    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:43.728686    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:43.728686    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:43.728686    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:43.728686    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:43.729399    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:43.729399    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:43.729399    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:43.729399    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:43 GMT
	I0308 00:34:43.729399    8176 round_trippers.go:580]     Audit-Id: 783dd207-8d68-45c0-a0a3-63a6971e504c
	I0308 00:34:43.729399    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:43.729399    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:43.729399    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:43.732113    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:44.226706    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:44.226706    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:44.226706    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:44.226706    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:44.227437    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:44.227437    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:44.227437    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:44.227437    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:44.227437    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:44.227437    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:44.227437    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:44 GMT
	I0308 00:34:44.227437    8176 round_trippers.go:580]     Audit-Id: 3b51903a-e18c-4506-9842-29d5e1d9c308
	I0308 00:34:44.230717    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:44.231012    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:44.231012    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:44.231012    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:44.231012    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:44.233265    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:34:44.233265    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:44.233265    8176 round_trippers.go:580]     Audit-Id: 71028b31-989c-41c9-9bbf-744e5e5c8316
	I0308 00:34:44.233265    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:44.233265    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:44.234744    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:44.234744    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:44.234744    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:44 GMT
	I0308 00:34:44.234826    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:44.235669    8176 pod_ready.go:102] pod "etcd-multinode-397400" in "kube-system" namespace has status "Ready":"False"
	I0308 00:34:44.724632    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:44.724632    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:44.724632    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:44.724725    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:44.725511    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:44.727786    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:44.727786    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:44.727786    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:44.727786    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:44.727786    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:44.727786    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:44 GMT
	I0308 00:34:44.727786    8176 round_trippers.go:580]     Audit-Id: 06c70828-4b42-47e3-af64-f493e1f6506e
	I0308 00:34:44.728622    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:44.728757    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:44.728757    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:44.728757    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:44.728757    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:44.729541    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:44.731973    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:44.731973    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:44.731973    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:44.731973    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:44.731973    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:44 GMT
	I0308 00:34:44.731973    8176 round_trippers.go:580]     Audit-Id: 6ed3e935-43e7-4830-8b03-3cee016fdf6e
	I0308 00:34:44.732064    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:44.732370    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:45.231292    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:45.231292    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:45.231399    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:45.231399    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:45.231741    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:45.231741    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:45.231741    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:45.235862    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:45.235862    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:45.235862    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:45 GMT
	I0308 00:34:45.235862    8176 round_trippers.go:580]     Audit-Id: 6b537cfc-2d08-4b0e-9917-2031c46a0d65
	I0308 00:34:45.235862    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:45.236043    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:45.236769    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:45.236769    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:45.236834    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:45.236834    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:45.240943    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:34:45.240943    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:45.240943    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:45 GMT
	I0308 00:34:45.240943    8176 round_trippers.go:580]     Audit-Id: abdd7376-b12c-4076-89fa-4de1811be3e8
	I0308 00:34:45.240943    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:45.240943    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:45.240943    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:45.240943    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:45.241564    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:45.723181    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:45.723181    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:45.723181    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:45.723181    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:45.723933    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:45.723933    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:45.727254    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:45.727254    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:45.727254    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:45.727254    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:45 GMT
	I0308 00:34:45.727254    8176 round_trippers.go:580]     Audit-Id: 899a3a0b-2fb7-4890-bc5b-b5ffb9ed36ce
	I0308 00:34:45.727254    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:45.727394    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:45.728011    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:45.728100    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:45.728100    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:45.728100    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:45.728298    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:45.728298    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:45.731032    8176 round_trippers.go:580]     Audit-Id: 6c1ed798-985b-4653-8b9b-29d53aecaedc
	I0308 00:34:45.731032    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:45.731032    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:45.731032    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:45.731032    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:45.731032    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:45 GMT
	I0308 00:34:45.731499    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:46.229582    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:46.229651    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.229651    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.229651    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.230994    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:46.233258    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.233258    8176 round_trippers.go:580]     Audit-Id: fa5dd2c0-2539-456a-856c-37f4f891961c
	I0308 00:34:46.233258    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.233258    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.233258    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.233258    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.233258    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.233453    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1768","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0308 00:34:46.233983    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:46.233983    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.233983    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.233983    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.234811    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:46.234811    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.234811    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.234811    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.237784    8176 round_trippers.go:580]     Audit-Id: 9d7696f8-51e0-4b0d-bb11-4496192e2ff0
	I0308 00:34:46.237784    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.237784    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.237784    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.238161    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:46.238294    8176 pod_ready.go:92] pod "etcd-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:46.238294    8176 pod_ready.go:81] duration metric: took 4.0189104s for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:46.238294    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:46.238294    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-397400
	I0308 00:34:46.238294    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.238294    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.238294    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.240779    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:34:46.240779    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.240779    8176 round_trippers.go:580]     Audit-Id: 2f8cc8d2-5f99-444b-9a70-8a1ac16f9a10
	I0308 00:34:46.240779    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.242750    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.242750    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.242750    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.242750    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.243057    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-397400","namespace":"kube-system","uid":"1e615aff-4d66-4ded-b27a-16bc990c80a6","resourceVersion":"1767","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.61.151:8443","kubernetes.io/config.hash":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.mirror":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143837944Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0308 00:34:46.243592    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:46.243592    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.243592    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.243592    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.244143    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:46.244143    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.244143    8176 round_trippers.go:580]     Audit-Id: 86218f0a-24f9-4c53-9fab-5b9d74d256c6
	I0308 00:34:46.247197    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.247197    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.247197    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.247197    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.247197    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.247369    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:46.248176    8176 pod_ready.go:92] pod "kube-apiserver-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:46.248176    8176 pod_ready.go:81] duration metric: took 9.8815ms for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:46.248176    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:46.248352    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-397400
	I0308 00:34:46.248352    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.248352    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.248352    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.248870    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:46.248870    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.251300    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.251300    8176 round_trippers.go:580]     Audit-Id: 1197856d-66bc-471d-ab2d-880c57b1071d
	I0308 00:34:46.251300    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.251300    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.251300    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.251300    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.251720    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-397400","namespace":"kube-system","uid":"33cdb29c-e857-4fc2-b950-4fdde032852f","resourceVersion":"1663","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.mirror":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.seen":"2024-03-08T00:13:39.441057580Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I0308 00:34:46.313024    8176 request.go:629] Waited for 60.7503ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:46.313238    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:46.313296    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.313296    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.313296    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.314094    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:46.316005    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.316005    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.316005    8176 round_trippers.go:580]     Audit-Id: 136f54a4-e9db-4a5f-946e-2b308a98706e
	I0308 00:34:46.316068    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.316068    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.316068    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.316068    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.316335    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:46.762798    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-397400
	I0308 00:34:46.762897    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.762897    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.762897    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.763161    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:46.763161    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.766285    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.766285    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.766285    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.766285    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.766285    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.766285    8176 round_trippers.go:580]     Audit-Id: 4e09f8b6-8329-49d0-ad59-22e9a4fbc912
	I0308 00:34:46.766726    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-397400","namespace":"kube-system","uid":"33cdb29c-e857-4fc2-b950-4fdde032852f","resourceVersion":"1769","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.mirror":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.seen":"2024-03-08T00:13:39.441057580Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0308 00:34:46.767444    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:46.767444    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.767444    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.767444    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.768331    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:46.768331    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.768331    8176 round_trippers.go:580]     Audit-Id: 650df614-1940-4eba-a242-5c90d8b979bd
	I0308 00:34:46.768331    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.768331    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.768331    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.768331    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.768331    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.771114    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:46.771332    8176 pod_ready.go:92] pod "kube-controller-manager-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:46.771332    8176 pod_ready.go:81] duration metric: took 523.1512ms for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:46.771332    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:46.912918    8176 request.go:629] Waited for 141.4168ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:34:46.913128    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:34:46.913212    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.913212    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.916301    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.916562    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:46.916562    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.916562    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.916562    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.916562    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.916562    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.916562    8176 round_trippers.go:580]     Audit-Id: adfa35f1-e41b-40c0-a500-5d7c7bb423be
	I0308 00:34:46.916562    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.919370    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gw9w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b5de9a2-0643-466e-9a31-4349596c0417","resourceVersion":"610","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0308 00:34:47.123568    8176 request.go:629] Waited for 203.4905ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:34:47.123568    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:34:47.123568    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:47.123568    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:47.123568    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:47.124411    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:47.124411    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:47.124411    8176 round_trippers.go:580]     Audit-Id: f268e230-8c85-428d-a852-85086f64ffdd
	I0308 00:34:47.124411    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:47.124411    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:47.124411    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:47.127418    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:47.127418    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:47 GMT
	I0308 00:34:47.127721    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"1341","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0308 00:34:47.127721    8176 pod_ready.go:92] pod "kube-proxy-gw9w9" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:47.128276    8176 pod_ready.go:81] duration metric: took 356.3855ms for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:47.128276    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:47.320046    8176 request.go:629] Waited for 191.4774ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:34:47.320437    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:34:47.320437    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:47.320509    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:47.320509    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:47.321191    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:47.321191    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:47.321191    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:47.324026    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:47.324026    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:47 GMT
	I0308 00:34:47.324026    8176 round_trippers.go:580]     Audit-Id: 7b244d25-03da-4b60-8dac-7d0dc1df73f7
	I0308 00:34:47.324026    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:47.324026    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:47.324248    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ktnrd","generateName":"kube-proxy-","namespace":"kube-system","uid":"e76aaee4-f97d-4d55-b458-893eef62fb22","resourceVersion":"1626","creationTimestamp":"2024-03-08T00:20:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:20:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0308 00:34:47.513266    8176 request.go:629] Waited for 189.016ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:34:47.513449    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:34:47.513590    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:47.513590    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:47.513590    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:47.514314    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:47.514314    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:47.514314    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:47.514314    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:47.514314    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:47.514314    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:47.517380    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:47 GMT
	I0308 00:34:47.517380    8176 round_trippers.go:580]     Audit-Id: fae6c14b-7c29-4c30-bd28-79989b5d6cea
	I0308 00:34:47.517556    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"4a97100d-ade6-4031-b2fe-9e9ba736320e","resourceVersion":"1765","creationTimestamp":"2024-03-08T00:30:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_30_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:30:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0308 00:34:47.517626    8176 pod_ready.go:97] node "multinode-397400-m03" hosting pod "kube-proxy-ktnrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400-m03" has status "Ready":"Unknown"
	I0308 00:34:47.517626    8176 pod_ready.go:81] duration metric: took 389.3468ms for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	E0308 00:34:47.517626    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400-m03" hosting pod "kube-proxy-ktnrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400-m03" has status "Ready":"Unknown"
	I0308 00:34:47.518180    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:47.717836    8176 request.go:629] Waited for 199.5756ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:34:47.718045    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:34:47.718045    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:47.718045    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:47.718045    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:47.718409    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:47.718409    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:47.718409    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:47.718409    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:47.718409    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:47.718409    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:47.718409    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:47 GMT
	I0308 00:34:47.718409    8176 round_trippers.go:580]     Audit-Id: 599f684a-9792-4ce0-9605-d163cfc4d4cd
	I0308 00:34:47.721673    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nt8td","generateName":"kube-proxy-","namespace":"kube-system","uid":"dafb9385-fe20-4849-bd58-31dcf82b4a58","resourceVersion":"1674","creationTimestamp":"2024-03-08T00:13:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0308 00:34:47.917915    8176 request.go:629] Waited for 195.1974ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:47.917915    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:47.918109    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:47.918109    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:47.918109    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:47.918466    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:47.918466    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:47.918466    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:47.918466    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:47.918466    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:47.918466    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:47.918466    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:47 GMT
	I0308 00:34:47.918466    8176 round_trippers.go:580]     Audit-Id: 6c4cb79c-0319-4eda-baac-edbbe3ec49dc
	I0308 00:34:47.921827    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:47.922357    8176 pod_ready.go:92] pod "kube-proxy-nt8td" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:47.922357    8176 pod_ready.go:81] duration metric: took 404.1731ms for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:47.922357    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:48.115035    8176 request.go:629] Waited for 192.4523ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:48.115470    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:48.115497    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:48.115497    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:48.115497    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:48.119099    8176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:34:48.119099    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:48.119099    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:48.119099    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:48 GMT
	I0308 00:34:48.119099    8176 round_trippers.go:580]     Audit-Id: 4e2a6673-f649-43ce-9108-49560b16ab40
	I0308 00:34:48.119099    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:48.119099    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:48.119099    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:48.119099    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1744","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0308 00:34:48.326335    8176 request.go:629] Waited for 205.9565ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:48.326582    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:48.326582    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:48.326582    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:48.326582    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:48.327260    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:48.330868    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:48.330868    8176 round_trippers.go:580]     Audit-Id: 9375966d-5a38-4ad5-8ac2-7b83d8db35b0
	I0308 00:34:48.330868    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:48.330868    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:48.330868    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:48.330868    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:48.330868    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:48 GMT
	I0308 00:34:48.331077    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:48.331200    8176 pod_ready.go:92] pod "kube-scheduler-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:48.331200    8176 pod_ready.go:81] duration metric: took 408.8397ms for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:48.331200    8176 pod_ready.go:38] duration metric: took 8.6380334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:34:48.331200    8176 api_server.go:52] waiting for apiserver process to appear ...
	I0308 00:34:48.340060    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:34:48.363877    8176 command_runner.go:130] > 1978
	I0308 00:34:48.363877    8176 api_server.go:72] duration metric: took 8.9811648s to wait for apiserver process to appear ...
	I0308 00:34:48.363991    8176 api_server.go:88] waiting for apiserver healthz status ...
	I0308 00:34:48.363991    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:34:48.369939    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 200:
	ok
	I0308 00:34:48.372421    8176 round_trippers.go:463] GET https://172.20.61.151:8443/version
	I0308 00:34:48.372470    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:48.372470    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:48.372497    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:48.375787    8176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:34:48.375787    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:48.375787    8176 round_trippers.go:580]     Content-Length: 264
	I0308 00:34:48.375787    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:48 GMT
	I0308 00:34:48.375787    8176 round_trippers.go:580]     Audit-Id: deb4d218-80ac-49e7-874e-ff4126b2472c
	I0308 00:34:48.375787    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:48.375787    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:48.375787    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:48.375787    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:48.375787    8176 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0308 00:34:48.375787    8176 api_server.go:141] control plane version: v1.28.4
	I0308 00:34:48.375787    8176 api_server.go:131] duration metric: took 11.7956ms to wait for apiserver health ...
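	
	The /healthz and /version probes recorded above can be reproduced outside minikube. The following is a minimal, illustrative Go sketch (not minikube's own code): it assumes the apiserver at 172.20.61.151:8443 is reachable, that anonymous access to /healthz and /version is permitted (default RBAC grants this), and it skips certificate verification purely for brevity.
	
		package main
	
		import (
			"crypto/tls"
			"encoding/json"
			"fmt"
			"io"
			"net/http"
		)
	
		func main() {
			// Demo-only client: skip TLS verification so the self-signed
			// apiserver certificate does not have to be trusted first.
			client := &http.Client{Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			}}
	
			// /healthz answers with the plain string "ok" when healthy.
			resp, err := client.Get("https://172.20.61.151:8443/healthz")
			if err != nil {
				panic(err)
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
	
			// /version returns the build-info JSON shown in the log.
			resp, err = client.Get("https://172.20.61.151:8443/version")
			if err != nil {
				panic(err)
			}
			var v struct {
				GitVersion string `json:"gitVersion"`
				Platform   string `json:"platform"`
			}
			json.NewDecoder(resp.Body).Decode(&v)
			resp.Body.Close()
			fmt.Printf("version: %s (%s)\n", v.GitVersion, v.Platform)
		}
	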
	I0308 00:34:48.375787    8176 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 00:34:48.514599    8176 request.go:629] Waited for 138.6008ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:34:48.514679    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:34:48.514679    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:48.514679    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:48.514679    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:48.521619    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:48.521619    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:48.521619    8176 round_trippers.go:580]     Audit-Id: c8aa2d3f-e087-4ae3-9f84-747bdc0afce7
	I0308 00:34:48.521619    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:48.521619    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:48.521619    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:48.521619    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:48.521619    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:48 GMT
	I0308 00:34:48.523586    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1769"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1757","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82555 chars]
	I0308 00:34:48.527648    8176 system_pods.go:59] 12 kube-system pods found
	I0308 00:34:48.527648    8176 system_pods.go:61] "coredns-5dd5756b68-w4hzh" [d164fdff-2fa7-412c-86e6-f0fa957e0361] Running
	I0308 00:34:48.527648    8176 system_pods.go:61] "etcd-multinode-397400" [afdc3d40-e2cf-4751-9d88-09ecca9f4b0a] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kindnet-jvzwq" [3897294d-bb97-4445-a540-40cedb960e67] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kindnet-srl7h" [e3e7e96a-d2bb-4a32-baae-52b0a30ce886] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kindnet-wkwtm" [0f4e9963-262a-4dd2-b907-da97715a6378] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kube-apiserver-multinode-397400" [1e615aff-4d66-4ded-b27a-16bc990c80a6] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kube-controller-manager-multinode-397400" [33cdb29c-e857-4fc2-b950-4fdde032852f] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kube-proxy-gw9w9" [9b5de9a2-0643-466e-9a31-4349596c0417] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kube-proxy-ktnrd" [e76aaee4-f97d-4d55-b458-893eef62fb22] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kube-proxy-nt8td" [dafb9385-fe20-4849-bd58-31dcf82b4a58] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kube-scheduler-multinode-397400" [3f029955-80be-4e3d-a157-faec2631b9b8] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "storage-provisioner" [81b55677-743c-4d2f-b04f-95928d4a3868] Running
	I0308 00:34:48.527755    8176 system_pods.go:74] duration metric: took 151.9674ms to wait for pod list to return data ...
	I0308 00:34:48.527755    8176 default_sa.go:34] waiting for default service account to be created ...
	I0308 00:34:48.716141    8176 request.go:629] Waited for 188.2111ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/default/serviceaccounts
	I0308 00:34:48.716311    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/default/serviceaccounts
	I0308 00:34:48.716311    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:48.716311    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:48.716311    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:48.717199    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:48.717199    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:48.717199    8176 round_trippers.go:580]     Content-Length: 262
	I0308 00:34:48.719805    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:48 GMT
	I0308 00:34:48.719805    8176 round_trippers.go:580]     Audit-Id: f89fabd5-b48c-4458-bfe2-86fee162cffc
	I0308 00:34:48.719805    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:48.719805    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:48.719805    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:48.719805    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:48.719805    8176 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1769"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"095cdd29-7997-44a2-8aa0-51adc17297b9","resourceVersion":"333","creationTimestamp":"2024-03-08T00:13:51Z"}}]}
	I0308 00:34:48.719873    8176 default_sa.go:45] found service account: "default"
	I0308 00:34:48.719873    8176 default_sa.go:55] duration metric: took 192.1162ms for default service account to be created ...
	I0308 00:34:48.719873    8176 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 00:34:48.911262    8176 request.go:629] Waited for 191.3867ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:34:48.911387    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:34:48.911649    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:48.911649    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:48.911649    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:48.919625    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:48.919625    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:48.919625    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:48.919625    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:48.919625    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:48.919625    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:48 GMT
	I0308 00:34:48.919625    8176 round_trippers.go:580]     Audit-Id: 643f6ee0-a787-486a-8419-4c1fdb615dce
	I0308 00:34:48.919625    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:48.920689    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1769"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1757","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82555 chars]
	I0308 00:34:48.924042    8176 system_pods.go:86] 12 kube-system pods found
	I0308 00:34:48.924042    8176 system_pods.go:89] "coredns-5dd5756b68-w4hzh" [d164fdff-2fa7-412c-86e6-f0fa957e0361] Running
	I0308 00:34:48.924042    8176 system_pods.go:89] "etcd-multinode-397400" [afdc3d40-e2cf-4751-9d88-09ecca9f4b0a] Running
	I0308 00:34:48.924042    8176 system_pods.go:89] "kindnet-jvzwq" [3897294d-bb97-4445-a540-40cedb960e67] Running
	I0308 00:34:48.924042    8176 system_pods.go:89] "kindnet-srl7h" [e3e7e96a-d2bb-4a32-baae-52b0a30ce886] Running
	I0308 00:34:48.924042    8176 system_pods.go:89] "kindnet-wkwtm" [0f4e9963-262a-4dd2-b907-da97715a6378] Running
	I0308 00:34:48.924042    8176 system_pods.go:89] "kube-apiserver-multinode-397400" [1e615aff-4d66-4ded-b27a-16bc990c80a6] Running
	I0308 00:34:48.924042    8176 system_pods.go:89] "kube-controller-manager-multinode-397400" [33cdb29c-e857-4fc2-b950-4fdde032852f] Running
	I0308 00:34:48.924611    8176 system_pods.go:89] "kube-proxy-gw9w9" [9b5de9a2-0643-466e-9a31-4349596c0417] Running
	I0308 00:34:48.924611    8176 system_pods.go:89] "kube-proxy-ktnrd" [e76aaee4-f97d-4d55-b458-893eef62fb22] Running
	I0308 00:34:48.924611    8176 system_pods.go:89] "kube-proxy-nt8td" [dafb9385-fe20-4849-bd58-31dcf82b4a58] Running
	I0308 00:34:48.924611    8176 system_pods.go:89] "kube-scheduler-multinode-397400" [3f029955-80be-4e3d-a157-faec2631b9b8] Running
	I0308 00:34:48.924611    8176 system_pods.go:89] "storage-provisioner" [81b55677-743c-4d2f-b04f-95928d4a3868] Running
	I0308 00:34:48.924611    8176 system_pods.go:126] duration metric: took 204.736ms to wait for k8s-apps to be running ...
	I0308 00:34:48.924611    8176 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 00:34:48.934478    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 00:34:48.957476    8176 system_svc.go:56] duration metric: took 32.8645ms WaitForService to wait for kubelet
	I0308 00:34:48.957535    8176 kubeadm.go:576] duration metric: took 9.5748171s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 00:34:48.957535    8176 node_conditions.go:102] verifying NodePressure condition ...
	I0308 00:34:49.116129    8176 request.go:629] Waited for 158.1951ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes
	I0308 00:34:49.116254    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes
	I0308 00:34:49.116254    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:49.116254    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:49.116254    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:49.117044    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:49.117044    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:49.117044    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:49.121468    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:49 GMT
	I0308 00:34:49.121468    8176 round_trippers.go:580]     Audit-Id: 17a1bfa4-24a7-4dd4-8376-005ff18d8454
	I0308 00:34:49.121468    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:49.121468    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:49.121468    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:49.121681    8176 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1769"},"items":[{"metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15500 chars]
	I0308 00:34:49.122917    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:34:49.122917    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:34:49.122917    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:34:49.122917    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:34:49.122917    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:34:49.122917    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:34:49.122917    8176 node_conditions.go:105] duration metric: took 165.3182ms to run NodePressure ...
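	
	The three capacity pairs above correspond to the three nodes returned by GET /api/v1/nodes. A hedged client-go sketch of the same read is shown below; the kubeconfig path is a hypothetical example, the k8s.io/client-go module is assumed to be available, and it only prints capacity rather than evaluating pressure conditions the way minikube does.
	
		package main
	
		import (
			"context"
			"fmt"
	
			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
	
		func main() {
			// Hypothetical kubeconfig path for the example.
			cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube7\.kube\config`)
			if err != nil {
				panic(err)
			}
			cs, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				panic(err)
			}
			nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
			if err != nil {
				panic(err)
			}
			for _, n := range nodes.Items {
				cpu := n.Status.Capacity[corev1.ResourceCPU]
				eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
				fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
			}
		}
	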
	I0308 00:34:49.122917    8176 start.go:240] waiting for startup goroutines ...
	I0308 00:34:49.122917    8176 start.go:245] waiting for cluster config update ...
	I0308 00:34:49.122917    8176 start.go:254] writing updated cluster config ...
	I0308 00:34:49.126835    8176 out.go:177] 
	I0308 00:34:49.130331    8176 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:34:49.138155    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:34:49.138155    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:34:49.143344    8176 out.go:177] * Starting "multinode-397400-m02" worker node in "multinode-397400" cluster
	I0308 00:34:49.147063    8176 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 00:34:49.147131    8176 cache.go:56] Caching tarball of preloaded images
	I0308 00:34:49.147535    8176 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0308 00:34:49.147535    8176 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0308 00:34:49.147535    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:34:49.149827    8176 start.go:360] acquireMachinesLock for multinode-397400-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 00:34:49.149945    8176 start.go:364] duration metric: took 118µs to acquireMachinesLock for "multinode-397400-m02"
	I0308 00:34:49.149945    8176 start.go:96] Skipping create...Using existing machine configuration
	I0308 00:34:49.149945    8176 fix.go:54] fixHost starting: m02
	I0308 00:34:49.150553    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:34:50.983263    8176 main.go:141] libmachine: [stdout =====>] : Off
	
	I0308 00:34:50.983322    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:50.983322    8176 fix.go:112] recreateIfNeeded on multinode-397400-m02: state=Stopped err=<nil>
	W0308 00:34:50.983322    8176 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 00:34:50.987182    8176 out.go:177] * Restarting existing hyperv VM for "multinode-397400-m02" ...
	I0308 00:34:50.989569    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-397400-m02
	I0308 00:34:53.753164    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:34:53.753224    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:53.753279    8176 main.go:141] libmachine: Waiting for host to start...
	I0308 00:34:53.753364    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:34:55.755741    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:34:55.755741    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:55.755741    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:34:57.966149    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:34:57.971110    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:58.978025    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:00.944166    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:00.944449    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:00.944626    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:03.204005    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:35:03.214059    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:04.226300    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:06.216895    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:06.223560    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:06.223653    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:08.473806    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:35:08.473806    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:09.485466    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:11.456844    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:11.456976    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:11.456976    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:13.762002    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:35:13.762002    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:14.780556    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:16.730333    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:16.730621    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:16.730715    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:18.950810    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:18.950810    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:18.963497    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:20.865017    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:20.865017    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:20.874721    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:23.084481    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:23.094276    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:23.094745    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
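	
	The block above shows the driver polling Hyper-V through PowerShell until the restarted VM reports an IP address. A rough, stand-alone Go sketch of that polling pattern follows; the VM name and the PowerShell expression are taken from the log, while the timeout and sleep interval are arbitrary example values and the real driver adds its own retry and error handling.
	
		package main
	
		import (
			"fmt"
			"os/exec"
			"strings"
			"time"
		)
	
		// vmIP asks Hyper-V for the first IP address on the VM's first network
		// adapter, mirroring the PowerShell expression in the log above.
		func vmIP(name string) (string, error) {
			ps := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
			out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(out)), nil
		}
	
		func main() {
			deadline := time.Now().Add(5 * time.Minute) // example timeout
			for time.Now().Before(deadline) {
				if ip, err := vmIP("multinode-397400-m02"); err == nil && ip != "" {
					fmt.Println("VM is reachable at", ip)
					return
				}
				// An empty result simply means DHCP has not handed out a lease yet.
				time.Sleep(5 * time.Second)
			}
			fmt.Println("timed out waiting for an IP address")
		}
	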
	I0308 00:35:23.097354    8176 machine.go:94] provisionDockerMachine start ...
	I0308 00:35:23.097446    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:24.986062    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:24.986062    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:24.986062    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:27.239255    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:27.245000    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:27.248730    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:35:27.250085    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:35:27.250085    8176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 00:35:27.377743    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 00:35:27.377743    8176 buildroot.go:166] provisioning hostname "multinode-397400-m02"
	I0308 00:35:27.377743    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:29.208520    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:29.208520    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:29.208520    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:31.454485    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:31.464380    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:31.469728    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:35:31.470271    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:35:31.470271    8176 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-397400-m02 && echo "multinode-397400-m02" | sudo tee /etc/hostname
	I0308 00:35:31.619093    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-397400-m02
	
	I0308 00:35:31.619147    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:33.471869    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:33.471869    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:33.471869    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:35.652961    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:35.662869    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:35.668274    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:35:35.668724    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:35:35.668789    8176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-397400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-397400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-397400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 00:35:35.812652    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
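	
	Provisioning happens over SSH: the driver connects as the machine user and runs the hostname and /etc/hosts commands shown above. A minimal sketch of that step using golang.org/x/crypto/ssh follows; the "docker" user and the key path are taken from the sshutil line later in this log, the command is modelled on the one above, and this is illustrative only, not minikube's actual provisioner.
	
		package main
	
		import (
			"fmt"
			"os"
	
			"golang.org/x/crypto/ssh"
		)
	
		func main() {
			// Per-machine key and user as reported further down in the log.
			key, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa`)
			if err != nil {
				panic(err)
			}
			signer, err := ssh.ParsePrivateKey(key)
			if err != nil {
				panic(err)
			}
			cfg := &ssh.ClientConfig{
				User:            "docker",
				Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
				HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
			}
			client, err := ssh.Dial("tcp", "172.20.50.67:22", cfg)
			if err != nil {
				panic(err)
			}
			defer client.Close()
	
			session, err := client.NewSession()
			if err != nil {
				panic(err)
			}
			defer session.Close()
	
			// Same shape of command as the provisioner runs in the log above.
			out, err := session.CombinedOutput(`sudo hostname multinode-397400-m02 && echo "multinode-397400-m02" | sudo tee /etc/hostname`)
			fmt.Printf("output: %s err: %v\n", out, err)
		}
	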
	I0308 00:35:35.812754    8176 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 00:35:35.812829    8176 buildroot.go:174] setting up certificates
	I0308 00:35:35.812829    8176 provision.go:84] configureAuth start
	I0308 00:35:35.812893    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:37.660057    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:37.660308    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:37.660410    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:39.837022    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:39.837022    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:39.848074    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:41.699439    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:41.709461    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:41.709461    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:43.964833    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:43.975119    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:43.975119    8176 provision.go:143] copyHostCerts
	I0308 00:35:43.975258    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0308 00:35:43.975415    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 00:35:43.975415    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 00:35:43.975415    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 00:35:43.976642    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0308 00:35:43.976642    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 00:35:43.977207    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 00:35:43.977518    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 00:35:43.978308    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0308 00:35:43.978840    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 00:35:43.978840    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 00:35:43.979228    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 00:35:43.980121    8176 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-397400-m02 san=[127.0.0.1 172.20.50.67 localhost minikube multinode-397400-m02]
	I0308 00:35:44.088419    8176 provision.go:177] copyRemoteCerts
	I0308 00:35:44.110694    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 00:35:44.110694    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:45.958690    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:45.958690    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:45.971572    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:48.202091    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:48.202091    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:48.212139    8176 sshutil.go:53] new ssh client: &{IP:172.20.50.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:35:48.315275    8176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2045408s)
	I0308 00:35:48.315275    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0308 00:35:48.315894    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 00:35:48.357633    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0308 00:35:48.357633    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 00:35:48.397317    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0308 00:35:48.397705    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0308 00:35:48.437704    8176 provision.go:87] duration metric: took 12.6247209s to configureAuth
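
configureAuth copies the host CA material and then generates a Docker server certificate signed by that CA, with the SANs listed in the log (127.0.0.1, 172.20.50.67, localhost, minikube, multinode-397400-m02). A rough standalone Go sketch of CA-signed server-cert generation; the file names are placeholders and it assumes the CA key is a PKCS#1 RSA PEM, which may differ from the real setup:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func must(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Load the CA certificate and key (placeholder paths; assumes PKCS#1 RSA).
    	caPEM, err := os.ReadFile("ca.pem")
    	must(err)
    	caKeyPEM, err := os.ReadFile("ca-key.pem")
    	must(err)
    	caBlock, _ := pem.Decode(caPEM)
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	must(err)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
    	must(err)

    	// Fresh server key plus a template carrying the SANs from the log line.
    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	must(err)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-397400-m02"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "multinode-397400-m02"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.50.67")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	must(err)

    	// Write the PEM pair that would later be scp'd to /etc/docker on the node.
    	must(os.WriteFile("server.pem",
    		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
    	must(os.WriteFile("server-key.pem",
    		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
    			Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
    }
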
	I0308 00:35:48.437704    8176 buildroot.go:189] setting minikube options for container-runtime
	I0308 00:35:48.437704    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:35:48.438319    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:50.277108    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:50.277275    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:50.277275    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:52.510023    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:52.520722    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:52.525884    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:35:52.526589    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:35:52.526589    8176 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 00:35:52.656647    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 00:35:52.656733    8176 buildroot.go:70] root file system type: tmpfs
	I0308 00:35:52.656816    8176 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 00:35:52.656816    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:54.536025    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:54.536206    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:54.536261    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:56.732660    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:56.742203    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:56.747768    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:35:56.748322    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:35:56.748389    8176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.61.151"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 00:35:56.895715    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.61.151
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 00:35:56.895813    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:58.737184    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:58.737184    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:58.746540    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:00.958110    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:00.967751    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:00.975827    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:36:00.975827    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:36:00.975827    8176 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 00:36:02.248533    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0308 00:36:02.248601    8176 machine.go:97] duration metric: took 39.1508368s to provisionDockerMachine
	I0308 00:36:02.248630    8176 start.go:293] postStartSetup for "multinode-397400-m02" (driver="hyperv")
	I0308 00:36:02.248655    8176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 00:36:02.260943    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 00:36:02.260943    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:36:04.109849    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:04.109849    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:04.109849    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:06.323006    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:06.323006    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:06.330589    8176 sshutil.go:53] new ssh client: &{IP:172.20.50.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:36:06.440312    8176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1793292s)
	I0308 00:36:06.450955    8176 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 00:36:06.453809    8176 command_runner.go:130] > NAME=Buildroot
	I0308 00:36:06.453809    8176 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0308 00:36:06.453809    8176 command_runner.go:130] > ID=buildroot
	I0308 00:36:06.453809    8176 command_runner.go:130] > VERSION_ID=2023.02.9
	I0308 00:36:06.453809    8176 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0308 00:36:06.457759    8176 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 00:36:06.457759    8176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 00:36:06.457945    8176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 00:36:06.458426    8176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 00:36:06.458426    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0308 00:36:06.459144    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 00:36:06.484248    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 00:36:06.528894    8176 start.go:296] duration metric: took 4.2801975s for postStartSetup
	I0308 00:36:06.528894    8176 fix.go:56] duration metric: took 1m17.3782186s for fixHost
	I0308 00:36:06.528894    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:36:08.364945    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:08.364945    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:08.374514    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:10.644834    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:10.644834    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:10.650763    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:36:10.651375    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:36:10.651400    8176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 00:36:10.782653    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709858170.797430320
	
	I0308 00:36:10.782653    8176 fix.go:216] guest clock: 1709858170.797430320
	I0308 00:36:10.782653    8176 fix.go:229] Guest: 2024-03-08 00:36:10.79743032 +0000 UTC Remote: 2024-03-08 00:36:06.5288941 +0000 UTC m=+208.769560601 (delta=4.26853622s)
	I0308 00:36:10.782653    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:36:12.662073    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:12.662073    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:12.671760    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:14.912911    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:14.912911    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:14.928526    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:36:14.928736    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:36:14.928736    8176 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709858170
	I0308 00:36:15.070433    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 00:36:10 UTC 2024
	
	I0308 00:36:15.070433    8176 fix.go:236] clock set: Fri Mar  8 00:36:10 UTC 2024
	 (err=<nil>)
	I0308 00:36:15.070433    8176 start.go:83] releasing machines lock for "multinode-397400-m02", held for 1m25.919677s
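
The clock-sync step above reads `date +%s.%N` from the guest, compares it with the host-side timestamp, and then issues a `sudo date -s @<seconds>` to bring the clocks back together (the logged delta here was about 4.27s). A toy Go sketch of that comparison; the 2-second threshold is an arbitrary example, not minikube's policy, and it assumes the full 9-digit %N field:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses the "seconds.nanoseconds" string returned by
    // `date +%s.%N` on the guest and returns guest-time minus host-time.
    func clockDelta(guest string, host time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, err = strconv.ParseInt(parts[1], 10, 64)
    		if err != nil {
    			return 0, err
    		}
    	}
    	return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
    	delta, _ := clockDelta("1709858170.797430320", time.Now())
    	fmt.Println("guest clock delta:", delta)
    	if delta > 2*time.Second || delta < -2*time.Second { // example threshold only
    		fmt.Println("would run: sudo date -s @" + strconv.FormatInt(time.Now().Unix(), 10))
    	}
    }
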
	I0308 00:36:15.071057    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:36:16.931460    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:16.931460    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:16.931611    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:19.219316    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:19.230693    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:19.230945    8176 out.go:177] * Found network options:
	I0308 00:36:19.235656    8176 out.go:177]   - NO_PROXY=172.20.61.151
	W0308 00:36:19.238019    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 00:36:19.240089    8176 out.go:177]   - NO_PROXY=172.20.61.151
	W0308 00:36:19.241028    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 00:36:19.241028    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 00:36:19.245975    8176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 00:36:19.245975    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:36:19.254420    8176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0308 00:36:19.254420    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:36:21.205917    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:21.213207    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:21.213207    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:21.230099    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:21.230099    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:21.231738    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:23.590096    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:23.600813    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:23.601260    8176 sshutil.go:53] new ssh client: &{IP:172.20.50.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:36:23.622300    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:23.622300    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:23.623460    8176 sshutil.go:53] new ssh client: &{IP:172.20.50.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:36:23.813299    8176 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0308 00:36:23.813618    8176 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0308 00:36:23.813724    8176 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5677062s)
	I0308 00:36:23.813801    8176 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5592604s)
	W0308 00:36:23.813858    8176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 00:36:23.826444    8176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 00:36:23.843194    8176 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0308 00:36:23.853245    8176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 00:36:23.853245    8176 start.go:494] detecting cgroup driver to use...
	I0308 00:36:23.853416    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:36:23.888149    8176 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0308 00:36:23.897644    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 00:36:23.928573    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 00:36:23.936016    8176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 00:36:23.957800    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 00:36:23.984856    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:36:24.017387    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 00:36:24.046880    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:36:24.073509    8176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 00:36:24.103017    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 00:36:24.132538    8176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 00:36:24.143973    8176 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0308 00:36:24.160643    8176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 00:36:24.192019    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:36:24.360881    8176 ssh_runner.go:195] Run: sudo systemctl restart containerd
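
The containerd block above is a series of in-place sed edits on /etc/containerd/config.toml (sandbox image, SystemdCgroup, runtime type, conf_dir) followed by daemon-reload and restart. A hedged Go sketch of one of those rewrites, the SystemdCgroup flip, done with a regexp instead of sed:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setSystemdCgroupFalse performs the same substitution as
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    // on the contents of config.toml, preserving indentation.
    func setSystemdCgroupFalse(config string) string {
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
    }

    func main() {
    	in := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
    		"    SystemdCgroup = true\n"
    	fmt.Print(setSystemdCgroupFalse(in))
    }
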
	I0308 00:36:24.393885    8176 start.go:494] detecting cgroup driver to use...
	I0308 00:36:24.410923    8176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 00:36:24.431080    8176 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0308 00:36:24.431080    8176 command_runner.go:130] > [Unit]
	I0308 00:36:24.431080    8176 command_runner.go:130] > Description=Docker Application Container Engine
	I0308 00:36:24.431080    8176 command_runner.go:130] > Documentation=https://docs.docker.com
	I0308 00:36:24.431080    8176 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0308 00:36:24.431080    8176 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0308 00:36:24.431080    8176 command_runner.go:130] > StartLimitBurst=3
	I0308 00:36:24.431080    8176 command_runner.go:130] > StartLimitIntervalSec=60
	I0308 00:36:24.431080    8176 command_runner.go:130] > [Service]
	I0308 00:36:24.431694    8176 command_runner.go:130] > Type=notify
	I0308 00:36:24.431694    8176 command_runner.go:130] > Restart=on-failure
	I0308 00:36:24.431694    8176 command_runner.go:130] > Environment=NO_PROXY=172.20.61.151
	I0308 00:36:24.431694    8176 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0308 00:36:24.431747    8176 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0308 00:36:24.431772    8176 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0308 00:36:24.431772    8176 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0308 00:36:24.431772    8176 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0308 00:36:24.431904    8176 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0308 00:36:24.431904    8176 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0308 00:36:24.431945    8176 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0308 00:36:24.431945    8176 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0308 00:36:24.431945    8176 command_runner.go:130] > ExecStart=
	I0308 00:36:24.431980    8176 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0308 00:36:24.431980    8176 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0308 00:36:24.432019    8176 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0308 00:36:24.432019    8176 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0308 00:36:24.432019    8176 command_runner.go:130] > LimitNOFILE=infinity
	I0308 00:36:24.432055    8176 command_runner.go:130] > LimitNPROC=infinity
	I0308 00:36:24.432055    8176 command_runner.go:130] > LimitCORE=infinity
	I0308 00:36:24.432055    8176 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0308 00:36:24.432055    8176 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0308 00:36:24.432096    8176 command_runner.go:130] > TasksMax=infinity
	I0308 00:36:24.432096    8176 command_runner.go:130] > TimeoutStartSec=0
	I0308 00:36:24.432096    8176 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0308 00:36:24.432096    8176 command_runner.go:130] > Delegate=yes
	I0308 00:36:24.432131    8176 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0308 00:36:24.432131    8176 command_runner.go:130] > KillMode=process
	I0308 00:36:24.432131    8176 command_runner.go:130] > [Install]
	I0308 00:36:24.432173    8176 command_runner.go:130] > WantedBy=multi-user.target
	I0308 00:36:24.443212    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:36:24.478573    8176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 00:36:24.521721    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:36:24.553443    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:36:24.586011    8176 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0308 00:36:24.651351    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:36:24.672741    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:36:24.704854    8176 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0308 00:36:24.716759    8176 ssh_runner.go:195] Run: which cri-dockerd
	I0308 00:36:24.722392    8176 command_runner.go:130] > /usr/bin/cri-dockerd
	I0308 00:36:24.733413    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 00:36:24.750143    8176 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 00:36:24.794321    8176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 00:36:24.966303    8176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 00:36:25.125838    8176 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 00:36:25.125908    8176 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
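
docker.go then writes a small /etc/docker/daemon.json (130 bytes in this run) to pin the cgroup driver to cgroupfs. The exact payload is not echoed in the log, so the fields below are only a plausible example, rendered from Go; exec-opts with native.cgroupdriver is the standard Docker daemon.json knob for this:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Example daemon.json shape for forcing the cgroupfs driver; the real file
    // written by the provisioner may contain more fields (not shown in the log).
    type daemonConfig struct {
    	ExecOpts []string `json:"exec-opts"`
    }

    func main() {
    	cfg := daemonConfig{ExecOpts: []string{"native.cgroupdriver=cgroupfs"}}
    	out, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }
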
	I0308 00:36:25.166197    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:36:25.340343    8176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 00:36:26.904352    8176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.563957s)
	I0308 00:36:26.916489    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 00:36:26.949247    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:36:26.979878    8176 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 00:36:27.150002    8176 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 00:36:27.308625    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:36:27.477627    8176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 00:36:27.517767    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:36:27.549267    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:36:27.721282    8176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 00:36:27.815082    8176 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 00:36:27.826154    8176 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 00:36:27.833371    8176 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0308 00:36:27.834503    8176 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0308 00:36:27.834503    8176 command_runner.go:130] > Device: 0,22	Inode: 851         Links: 1
	I0308 00:36:27.834503    8176 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0308 00:36:27.834503    8176 command_runner.go:130] > Access: 2024-03-08 00:36:27.759423013 +0000
	I0308 00:36:27.834588    8176 command_runner.go:130] > Modify: 2024-03-08 00:36:27.759423013 +0000
	I0308 00:36:27.834608    8176 command_runner.go:130] > Change: 2024-03-08 00:36:27.763423041 +0000
	I0308 00:36:27.834608    8176 command_runner.go:130] >  Birth: -
	I0308 00:36:27.834608    8176 start.go:562] Will wait 60s for crictl version
	I0308 00:36:27.846885    8176 ssh_runner.go:195] Run: which crictl
	I0308 00:36:27.849988    8176 command_runner.go:130] > /usr/bin/crictl
	I0308 00:36:27.863585    8176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 00:36:27.930186    8176 command_runner.go:130] > Version:  0.1.0
	I0308 00:36:27.930294    8176 command_runner.go:130] > RuntimeName:  docker
	I0308 00:36:27.930294    8176 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0308 00:36:27.930294    8176 command_runner.go:130] > RuntimeApiVersion:  v1
	I0308 00:36:27.930353    8176 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 00:36:27.939128    8176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:36:27.967277    8176 command_runner.go:130] > 24.0.7
	I0308 00:36:27.976635    8176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:36:28.011011    8176 command_runner.go:130] > 24.0.7
	I0308 00:36:28.016997    8176 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 00:36:28.022193    8176 out.go:177]   - env NO_PROXY=172.20.61.151
	I0308 00:36:28.025119    8176 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 00:36:28.026887    8176 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 00:36:28.029965    8176 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 00:36:28.029965    8176 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 00:36:28.029965    8176 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 00:36:28.030240    8176 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 00:36:28.030240    8176 ip.go:210] interface addr: 172.20.48.1/20
	I0308 00:36:28.043325    8176 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 00:36:28.049499    8176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
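
The host.minikube.internal entry is refreshed with a filter-and-rewrite of /etc/hosts: grep -v drops any old mapping, the new line is appended, and the result is written to a temp file and copied back with sudo (a plain shell redirect would not run with elevated rights). The same idea as a standalone Go sketch on a local file path, illustrative only:

    package main

    import (
    	"os"
    	"strings"
    )

    // replaceHostEntry drops any existing line for the given host name and
    // appends a fresh "ip\thost" mapping, mirroring the shell pipeline above.
    func replaceHostEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	_ = replaceHostEntry("hosts.example", "172.20.48.1", "host.minikube.internal")
    }
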
	I0308 00:36:28.066841    8176 mustload.go:65] Loading cluster: multinode-397400
	I0308 00:36:28.067687    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:36:28.068374    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:36:29.942483    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:29.942483    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:29.953177    8176 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:36:29.953925    8176 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400 for IP: 172.20.50.67
	I0308 00:36:29.953925    8176 certs.go:194] generating shared ca certs ...
	I0308 00:36:29.953992    8176 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:36:29.954636    8176 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 00:36:29.954966    8176 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 00:36:29.955175    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 00:36:29.955455    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0308 00:36:29.955753    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 00:36:29.955918    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 00:36:29.955918    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 00:36:29.956526    8176 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 00:36:29.956767    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 00:36:29.956791    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 00:36:29.956791    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 00:36:29.957454    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 00:36:29.957488    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 00:36:29.957488    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:36:29.958147    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0308 00:36:29.958288    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0308 00:36:29.958467    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 00:36:30.003848    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 00:36:30.048433    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 00:36:30.090490    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 00:36:30.133399    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 00:36:30.173893    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 00:36:30.215607    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 00:36:30.266702    8176 ssh_runner.go:195] Run: openssl version
	I0308 00:36:30.274022    8176 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0308 00:36:30.283731    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 00:36:30.312333    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:36:30.318712    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:36:30.318712    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:36:30.328071    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:36:30.336920    8176 command_runner.go:130] > b5213941
	I0308 00:36:30.348845    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 00:36:30.377781    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 00:36:30.408676    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 00:36:30.411242    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:36:30.414871    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:36:30.425512    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 00:36:30.433383    8176 command_runner.go:130] > 51391683
	I0308 00:36:30.445073    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0308 00:36:30.471651    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 00:36:30.500178    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 00:36:30.503199    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:36:30.503199    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:36:30.517338    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 00:36:30.525569    8176 command_runner.go:130] > 3ec20f2e
	I0308 00:36:30.535655    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
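
Each extra CA file above is installed by hashing it with `openssl x509 -hash -noout -in <pem>` and symlinking /etc/ssl/certs/<hash>.0 at it, which is the c_rehash-style lookup scheme OpenSSL uses for a certificate directory. A small Go sketch that shells out for the hash and creates the link (paths are placeholders):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert computes the OpenSSL subject hash of a PEM certificate and
    // creates the <hash>.0 symlink expected by the certificate directory layout.
    func linkCACert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace a stale link if one exists
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Println("link failed:", err)
    	}
    }
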
	I0308 00:36:30.564860    8176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 00:36:30.566643    8176 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 00:36:30.570242    8176 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 00:36:30.570242    8176 kubeadm.go:928] updating node {m02 172.20.50.67 8443 v1.28.4 docker false true} ...
	I0308 00:36:30.570242    8176 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-397400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.50.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 00:36:30.580199    8176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 00:36:30.589246    8176 command_runner.go:130] > kubeadm
	I0308 00:36:30.589246    8176 command_runner.go:130] > kubectl
	I0308 00:36:30.589246    8176 command_runner.go:130] > kubelet
	I0308 00:36:30.589246    8176 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 00:36:30.608965    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0308 00:36:30.625722    8176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0308 00:36:30.654436    8176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
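
The kubelet unit for the joining node is rendered with the node-specific flags shown earlier in the log (hostname-override and node-ip for m02) and scp'd into /lib/systemd/system before daemon-reload and start. A hedged text/template sketch of that rendering; the template text is abridged from what the log prints, not the exact file:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Abridged kubelet unit template; field names here are illustrative.
    const unitTmpl = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	data := struct {
    		KubernetesVersion, NodeName, NodeIP string
    	}{"v1.28.4", "multinode-397400-m02", "172.20.50.67"}
    	t := template.Must(template.New("kubelet").Parse(unitTmpl))
    	if err := t.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }
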
	I0308 00:36:30.693370    8176 ssh_runner.go:195] Run: grep 172.20.61.151	control-plane.minikube.internal$ /etc/hosts
	I0308 00:36:30.699245    8176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.61.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 00:36:30.726715    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:36:30.913741    8176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:36:30.940078    8176 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:36:30.940421    8176 start.go:316] joinCluster: &{Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.61.151 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.50.67 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.52.190 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 00:36:30.941075    8176 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.20.50.67 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0308 00:36:30.941075    8176 host.go:66] Checking if "multinode-397400-m02" exists ...
	I0308 00:36:30.941075    8176 mustload.go:65] Loading cluster: multinode-397400
	I0308 00:36:30.941915    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:36:30.942533    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:36:32.876755    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:32.886774    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:32.886774    8176 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:36:32.887029    8176 api_server.go:166] Checking apiserver status ...
	I0308 00:36:32.898998    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:36:32.898998    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:36:34.784576    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:34.784576    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:34.795451    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:37.031844    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:36:37.031844    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:37.032363    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:36:37.148798    8176 command_runner.go:130] > 1978
	I0308 00:36:37.148909    8176 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.2498082s)
	I0308 00:36:37.160582    8176 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1978/cgroup
	W0308 00:36:37.174692    8176 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1978/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 00:36:37.184936    8176 ssh_runner.go:195] Run: ls
	I0308 00:36:37.191385    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:36:37.197501    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 200:
	ok
	I0308 00:36:37.208890    8176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-397400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0308 00:36:37.352830    8176 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-jvzwq, kube-system/kube-proxy-gw9w9
	I0308 00:36:40.393632    8176 command_runner.go:130] > node/multinode-397400-m02 cordoned
	I0308 00:36:40.393765    8176 command_runner.go:130] > pod "busybox-5b5d89c9d6-ctt42" has DeletionTimestamp older than 1 seconds, skipping
	I0308 00:36:40.393765    8176 command_runner.go:130] > node/multinode-397400-m02 drained
	I0308 00:36:40.393765    8176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-397400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1848448s)
	I0308 00:36:40.393890    8176 node.go:125] successfully drained node "multinode-397400-m02"
	I0308 00:36:40.394014    8176 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0308 00:36:40.394104    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:36:42.240007    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:42.240007    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:42.250124    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:44.527473    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:44.527473    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:44.527596    8176 sshutil.go:53] new ssh client: &{IP:172.20.50.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:36:44.944237    8176 command_runner.go:130] ! W0308 00:36:44.960939    1525 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0308 00:36:45.496320    8176 command_runner.go:130] ! W0308 00:36:45.511660    1525 cleanupnode.go:99] [reset] Failed to remove containers: failed to stop running pod e1279312270ec03fb432b87f141ec78feaaaf402401a919ea8eb0ab2dbd02b67: output: E0308 00:36:45.214172    1589 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-5b5d89c9d6-ctt42_default\" network: cni config uninitialized" podSandboxID="e1279312270ec03fb432b87f141ec78feaaaf402401a919ea8eb0ab2dbd02b67"
	I0308 00:36:45.496320    8176 command_runner.go:130] ! time="2024-03-08T00:36:45Z" level=fatal msg="stopping the pod sandbox \"e1279312270ec03fb432b87f141ec78feaaaf402401a919ea8eb0ab2dbd02b67\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-5b5d89c9d6-ctt42_default\" network: cni config uninitialized"
	I0308 00:36:45.496320    8176 command_runner.go:130] ! : exit status 1
	I0308 00:36:45.518465    8176 command_runner.go:130] > [preflight] Running pre-flight checks
	I0308 00:36:45.518465    8176 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0308 00:36:45.518465    8176 command_runner.go:130] > [reset] Stopping the kubelet service
	I0308 00:36:45.518465    8176 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0308 00:36:45.518465    8176 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0308 00:36:45.518465    8176 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0308 00:36:45.518465    8176 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0308 00:36:45.518465    8176 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0308 00:36:45.518465    8176 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0308 00:36:45.518465    8176 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0308 00:36:45.518465    8176 command_runner.go:130] > to reset your system's IPVS tables.
	I0308 00:36:45.518465    8176 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0308 00:36:45.518465    8176 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0308 00:36:45.518465    8176 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.1244031s)
	I0308 00:36:45.518465    8176 node.go:152] successfully reset node "multinode-397400-m02"
	I0308 00:36:45.519553    8176 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:36:45.520638    8176 kapi.go:59] client config for multinode-397400: &rest.Config{Host:"https://172.20.61.151:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 00:36:45.521738    8176 cert_rotation.go:137] Starting client certificate rotation controller
	I0308 00:36:45.522344    8176 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0308 00:36:45.522606    8176 round_trippers.go:463] DELETE https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:45.522606    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:45.522606    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:45.522606    8176 round_trippers.go:473]     Content-Type: application/json
	I0308 00:36:45.522685    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:45.542363    8176 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0308 00:36:45.542501    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:45.542501    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:45.542501    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:45.542597    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:45.542597    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:45.542597    8176 round_trippers.go:580]     Content-Length: 171
	I0308 00:36:45.542597    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:45 GMT
	I0308 00:36:45.542597    8176 round_trippers.go:580]     Audit-Id: 1f27f500-c60c-431d-9201-eb33ffb7c616
	I0308 00:36:45.542709    8176 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-397400-m02","kind":"nodes","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d"}}
	I0308 00:36:45.542807    8176 node.go:173] successfully deleted node "multinode-397400-m02"
	I0308 00:36:45.542878    8176 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.20.50.67 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
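
Deleting the stale Node object is a single authenticated DELETE against /api/v1/nodes/<name>, as the request and response dump above shows. The sketch below reproduces that call with only the standard library, using the client certificate, key, and CA paths from the client config logged earlier; minikube itself goes through client-go rather than hand-rolled HTTP.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Client certificate and CA from the minikube profile (paths as in the log).
	cert, err := tls.LoadX509KeyPair(
		`C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\client.crt`,
		`C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\client.key`,
	)
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt`)
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}

	// Same request body and headers as the round_trippers dump above.
	body := strings.NewReader(`{"kind":"DeleteOptions","apiVersion":"v1"}`)
	req, _ := http.NewRequest(http.MethodDelete,
		"https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02", body)
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Accept", "application/json")

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, out) // expect a v1 Status with "Success"
}
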
	I0308 00:36:45.542936    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0308 00:36:45.543050    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:36:47.364924    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:47.375639    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:47.375639    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:49.617106    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:36:49.627809    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:49.627809    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:36:49.814817    8176 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 53hp1a.b7h9g76eoa0slcf9 --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
	I0308 00:36:49.814900    8176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.2718977s)
	I0308 00:36:49.815007    8176 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.20.50.67 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0308 00:36:49.815091    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 53hp1a.b7h9g76eoa0slcf9 --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-397400-m02"
	I0308 00:36:50.032201    8176 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 00:36:52.324838    8176 command_runner.go:130] > [preflight] Running pre-flight checks
	I0308 00:36:52.324838    8176 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0308 00:36:52.324838    8176 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0308 00:36:52.327438    8176 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 00:36:52.327438    8176 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 00:36:52.327438    8176 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0308 00:36:52.327438    8176 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0308 00:36:52.327438    8176 command_runner.go:130] > This node has joined the cluster:
	I0308 00:36:52.327438    8176 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0308 00:36:52.327438    8176 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0308 00:36:52.327438    8176 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0308 00:36:52.327568    8176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 53hp1a.b7h9g76eoa0slcf9 --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-397400-m02": (2.5124531s)
	I0308 00:36:52.327641    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0308 00:36:52.523779    8176 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0308 00:36:52.728688    8176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-397400-m02 minikube.k8s.io/updated_at=2024_03_08T00_36_52_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=multinode-397400 minikube.k8s.io/primary=false
	I0308 00:36:52.865519    8176 command_runner.go:130] > node/multinode-397400-m02 labeled
	I0308 00:36:52.865619    8176 start.go:318] duration metric: took 21.9249897s to joinCluster
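
The preceding lines walk the worker through the re-join sequence: mint a fresh bootstrap token with "kubeadm token create --print-join-command", run the printed join command on the worker with the extra flags shown, enable and start kubelet, and finally label the node. A compressed sketch of that sequence is below; it assumes each command is executed on the appropriate host (minikube runs them over SSH) and the label set is abbreviated.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a shell command and returns its trimmed combined output.
func run(command string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// 1. On the control plane: print a join command carrying a fresh token.
	joinCmd, err := run("sudo kubeadm token create --print-join-command --ttl=0")
	if err != nil {
		panic(err)
	}

	// 2. On the worker: run the printed join command with the extra flags from the log.
	if _, err := run("sudo " + joinCmd +
		" --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock" +
		" --node-name=multinode-397400-m02"); err != nil {
		panic(err)
	}

	// 3. On the worker: make sure kubelet is enabled and running.
	if _, err := run("sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"); err != nil {
		panic(err)
	}

	// 4. Back on the control plane: label the node so minikube can track it.
	out, err := run("kubectl label --overwrite nodes multinode-397400-m02 minikube.k8s.io/primary=false")
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // node/multinode-397400-m02 labeled
}
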
	I0308 00:36:52.865619    8176 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.20.50.67 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0308 00:36:52.870529    8176 out.go:177] * Verifying Kubernetes components...
	I0308 00:36:52.866337    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:36:52.883641    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:36:53.101431    8176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:36:53.136287    8176 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:36:53.136946    8176 kapi.go:59] client config for multinode-397400: &rest.Config{Host:"https://172.20.61.151:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 00:36:53.137886    8176 node_ready.go:35] waiting up to 6m0s for node "multinode-397400-m02" to be "Ready" ...
	I0308 00:36:53.138090    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:53.138136    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:53.138136    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:53.138181    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:53.142716    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:36:53.142716    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:53.142716    8176 round_trippers.go:580]     Audit-Id: d642fff0-235e-4548-8168-848b99b36317
	I0308 00:36:53.142716    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:53.142716    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:53.142716    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:53.142716    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:53.142716    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:53 GMT
	I0308 00:36:53.142716    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1900","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3687 chars]
	I0308 00:36:53.641753    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:53.641753    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:53.641753    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:53.641753    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:53.642484    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:53.646014    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:53.646014    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:53.646014    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:53.646014    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:53.646014    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:53.646014    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:53 GMT
	I0308 00:36:53.646014    8176 round_trippers.go:580]     Audit-Id: d698ab4e-8732-4cee-9c6e-de68792c624e
	I0308 00:36:53.646014    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1900","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3687 chars]
	I0308 00:36:54.160444    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:54.160444    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:54.160444    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:54.160444    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:54.160977    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:54.164571    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:54.164571    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:54.164571    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:54.164571    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:54.164571    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:54.164571    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:54 GMT
	I0308 00:36:54.164571    8176 round_trippers.go:580]     Audit-Id: d3529c63-ac76-439a-9500-192a0eabc119
	I0308 00:36:54.164813    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1900","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3687 chars]
	I0308 00:36:54.644205    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:54.644294    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:54.644294    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:54.644294    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:54.644731    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:54.649050    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:54.649050    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:54.649050    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:54.649050    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:54.649050    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:54.649050    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:54 GMT
	I0308 00:36:54.649050    8176 round_trippers.go:580]     Audit-Id: 1ecf8ec5-60ec-4b8b-a696-27110ae0640e
	I0308 00:36:54.649050    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1913","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3796 chars]
	I0308 00:36:55.153014    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:55.153014    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:55.153118    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:55.153118    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:55.153377    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:55.153377    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:55.153377    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:55.153377    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:55.153377    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:55 GMT
	I0308 00:36:55.153377    8176 round_trippers.go:580]     Audit-Id: 7df95c67-cbb5-4d1a-88d6-d60acd8c4306
	I0308 00:36:55.153377    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:55.153377    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:55.157394    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1913","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3796 chars]
	I0308 00:36:55.157737    8176 node_ready.go:53] node "multinode-397400-m02" has status "Ready":"False"
	I0308 00:36:55.640996    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:55.641067    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:55.641067    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:55.641067    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:55.641323    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:55.641323    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:55.644641    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:55.644641    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:55.644641    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:55.644641    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:55 GMT
	I0308 00:36:55.644641    8176 round_trippers.go:580]     Audit-Id: f4a9e549-1ae4-406f-88e3-0fe28040b580
	I0308 00:36:55.644641    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:55.644867    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1913","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3796 chars]
	I0308 00:36:56.140737    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:56.140829    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:56.140829    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:56.140829    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:56.142258    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:36:56.143929    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:56.143929    8176 round_trippers.go:580]     Audit-Id: ea139d21-04e2-4ac9-85ed-022dfc5b53de
	I0308 00:36:56.143929    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:56.143929    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:56.144002    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:56.144002    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:56.144002    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:56 GMT
	I0308 00:36:56.144002    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1913","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3796 chars]
	I0308 00:36:56.649461    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:56.649670    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:56.649670    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:56.649670    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:56.650010    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:56.650010    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:56.650010    8176 round_trippers.go:580]     Audit-Id: e16a2e11-2624-4003-ba02-375cfac37da1
	I0308 00:36:56.650010    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:56.652903    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:56.652903    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:56.652903    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:56.652903    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:56 GMT
	I0308 00:36:56.653029    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1913","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3796 chars]
	I0308 00:36:57.153713    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:57.153713    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:57.153713    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:57.153713    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:57.154258    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:57.154258    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:57.157052    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:57.157052    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:57.157052    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:57 GMT
	I0308 00:36:57.157052    8176 round_trippers.go:580]     Audit-Id: 6ffdc11e-7ed4-45db-a0b0-fd3f444d9b0a
	I0308 00:36:57.157052    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:57.157052    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:57.157246    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1913","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3796 chars]
	I0308 00:36:57.642752    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:57.642752    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:57.642828    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:57.642828    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:57.643090    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:57.643090    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:57.643090    8176 round_trippers.go:580]     Audit-Id: d9e68acf-40ed-4eb9-8861-38bab5a7d765
	I0308 00:36:57.643090    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:57.643090    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:57.643090    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:57.643090    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:57.643090    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:57 GMT
	I0308 00:36:57.646096    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1913","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3796 chars]
	I0308 00:36:57.646183    8176 node_ready.go:53] node "multinode-397400-m02" has status "Ready":"False"
	I0308 00:36:58.139920    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:58.139920    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.139920    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.139920    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.140722    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.143368    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.143368    8176 round_trippers.go:580]     Audit-Id: a87d373c-50a9-43ad-abb5-8bd8710616cd
	I0308 00:36:58.143368    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.143368    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.143368    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.143368    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.143368    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.143586    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1925","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0308 00:36:58.143939    8176 node_ready.go:49] node "multinode-397400-m02" has status "Ready":"True"
	I0308 00:36:58.143939    8176 node_ready.go:38] duration metric: took 5.0059577s for node "multinode-397400-m02" to be "Ready" ...
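
Once the node has joined, minikube simply polls GET /api/v1/nodes/<name> (the repeated round_trippers requests above) until the Node reports a Ready condition with status True, which here took about five seconds. A standalone sketch of such a wait loop follows; it decodes only the condition fields it needs, and the *http.Client passed in is assumed to carry the cluster credentials (the default client used in main below would not authenticate against a real apiserver).

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus mirrors just the fields of a v1 Node needed to read readiness.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// waitNodeReady polls the apiserver until the node reports Ready=True or the
// deadline passes.
func waitNodeReady(client *http.Client, apiServer, node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	url := fmt.Sprintf("%s/api/v1/nodes/%s", apiServer, node)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			var n nodeStatus
			if json.NewDecoder(resp.Body).Decode(&n) == nil {
				for _, c := range n.Status.Conditions {
					if c.Type == "Ready" && c.Status == "True" {
						resp.Body.Close()
						return nil
					}
				}
			}
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", node, timeout)
}

func main() {
	// Illustration only: substitute the TLS-configured client from the earlier sketch.
	err := waitNodeReady(http.DefaultClient, "https://172.20.61.151:8443", "multinode-397400-m02", 6*time.Minute)
	fmt.Println("wait result:", err)
}
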
	I0308 00:36:58.143939    8176 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:36:58.143939    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:36:58.143939    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.143939    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.143939    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.144753    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.144753    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.144753    8176 round_trippers.go:580]     Audit-Id: dff7b66e-0987-414a-ba9a-20dea66dbeb2
	I0308 00:36:58.149137    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.149137    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.149137    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.149137    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.149137    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.151102    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1927"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1757","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82549 chars]
	I0308 00:36:58.154982    8176 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.155331    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:36:58.155331    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.155331    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.155331    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.158404    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:36:58.158404    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.158404    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.158404    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.158404    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.158404    8176 round_trippers.go:580]     Audit-Id: a2190457-1023-4cb0-8349-1411f6ebedff
	I0308 00:36:58.158404    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.158404    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.158404    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1757","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0308 00:36:58.159117    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:58.159211    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.159211    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.159211    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.163947    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:36:58.163947    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.163947    8176 round_trippers.go:580]     Audit-Id: 45cee827-f036-47d8-b7c9-0e0a3a5ed34d
	I0308 00:36:58.163947    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.163947    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.163947    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.163947    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.163947    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.163947    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:36:58.164651    8176 pod_ready.go:92] pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace has status "Ready":"True"
	I0308 00:36:58.164651    8176 pod_ready.go:81] duration metric: took 9.6687ms for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
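
The remaining system-pod checks (etcd, kube-apiserver, kube-controller-manager, and so on below) repeat the same predicate: fetch the pod, then look for a Ready condition with status True among its status conditions. A tiny helper capturing just that predicate:

package main

import "fmt"

// podCondition mirrors the subset of a v1 Pod status condition consulted by the check.
type podCondition struct {
	Type   string
	Status string
}

// podReady reports whether a pod's conditions include Ready=True, which is the
// predicate behind the 'has status "Ready":"True"' lines in the log.
func podReady(conds []podCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" && c.Status == "True" {
			return true
		}
	}
	return false
}

func main() {
	conds := []podCondition{{Type: "PodScheduled", Status: "True"}, {Type: "Ready", Status: "True"}}
	fmt.Println(podReady(conds)) // true
}
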
	I0308 00:36:58.164651    8176 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.164651    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:36:58.164651    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.164651    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.164651    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.167280    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:36:58.167280    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.167280    8176 round_trippers.go:580]     Audit-Id: f7242511-c85c-4441-b950-792f16811bc0
	I0308 00:36:58.167280    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.167280    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.167280    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.167280    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.167280    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.168812    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1768","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0308 00:36:58.168903    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:58.168903    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.168903    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.168903    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.171892    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:36:58.171998    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.171998    8176 round_trippers.go:580]     Audit-Id: a00f9dd1-98f0-482b-8890-9051cde55f76
	I0308 00:36:58.171998    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.171998    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.171998    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.172041    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.172041    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.172065    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:36:58.172719    8176 pod_ready.go:92] pod "etcd-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:36:58.172719    8176 pod_ready.go:81] duration metric: took 8.0676ms for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.172787    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.172898    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-397400
	I0308 00:36:58.172942    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.172942    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.172989    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.173720    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.173720    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.173720    8176 round_trippers.go:580]     Audit-Id: f86ef96a-2ce2-4795-8528-571963e40341
	I0308 00:36:58.173720    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.173720    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.173720    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.175804    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.175804    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.175882    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-397400","namespace":"kube-system","uid":"1e615aff-4d66-4ded-b27a-16bc990c80a6","resourceVersion":"1767","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.61.151:8443","kubernetes.io/config.hash":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.mirror":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143837944Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0308 00:36:58.176468    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:58.176468    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.176468    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.176468    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.177162    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.179084    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.179084    8176 round_trippers.go:580]     Audit-Id: 95f11dc7-68cf-4f37-ac55-f282c216ff10
	I0308 00:36:58.179084    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.179084    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.179165    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.179165    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.179165    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.179561    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:36:58.179854    8176 pod_ready.go:92] pod "kube-apiserver-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:36:58.179854    8176 pod_ready.go:81] duration metric: took 7.0668ms for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.179854    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.179854    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-397400
	I0308 00:36:58.179854    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.179854    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.179854    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.180671    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.180671    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.180671    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.180671    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.180671    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.180671    8176 round_trippers.go:580]     Audit-Id: 9b72e4f6-391c-4b60-8577-225f365d58d5
	I0308 00:36:58.180671    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.180671    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.183280    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-397400","namespace":"kube-system","uid":"33cdb29c-e857-4fc2-b950-4fdde032852f","resourceVersion":"1769","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.mirror":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.seen":"2024-03-08T00:13:39.441057580Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0308 00:36:58.183867    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:58.183867    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.183941    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.183941    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.185543    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:36:58.186956    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.186956    8176 round_trippers.go:580]     Audit-Id: b4fe4f9a-cfae-4a0a-9628-409d993ea51b
	I0308 00:36:58.186956    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.186956    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.186956    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.186956    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.186956    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.187202    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:36:58.187232    8176 pod_ready.go:92] pod "kube-controller-manager-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:36:58.187232    8176 pod_ready.go:81] duration metric: took 7.3778ms for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.187232    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.348406    8176 request.go:629] Waited for 161.1728ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:36:58.348670    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:36:58.348752    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.348752    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.348752    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.348949    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.348949    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.348949    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.348949    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.348949    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.354829    8176 round_trippers.go:580]     Audit-Id: 7089cce2-746e-4cfc-ae4d-e001ed2b7c0f
	I0308 00:36:58.354829    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.354829    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.355473    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gw9w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b5de9a2-0643-466e-9a31-4349596c0417","resourceVersion":"1907","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5538 chars]
	I0308 00:36:58.543785    8176 request.go:629] Waited for 188.1061ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:58.543958    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:58.543958    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.543958    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.544032    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.544794    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.544794    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.544794    8176 round_trippers.go:580]     Audit-Id: ecb7c41b-abd3-4524-b4df-5a308fbec085
	I0308 00:36:58.544794    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.544794    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.544794    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.544794    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.544794    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.547837    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1925","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0308 00:36:58.548314    8176 pod_ready.go:92] pod "kube-proxy-gw9w9" in "kube-system" namespace has status "Ready":"True"
	I0308 00:36:58.548314    8176 pod_ready.go:81] duration metric: took 361.079ms for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
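The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from the Kubernetes client's own rate limiter (a QPS/burst token bucket applied before the request is even sent), not from server-side priority and fairness. A minimal sketch of the same idea using golang.org/x/time/rate; the QPS and burst values are illustrative assumptions, not necessarily what this binary is configured with:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"golang.org/x/time/rate"
    )

    func main() {
    	// Token bucket: refill at 5 requests/second, allow bursts of 10.
    	// These numbers are assumptions for illustration only.
    	limiter := rate.NewLimiter(rate.Limit(5), 10)

    	for i := 0; i < 15; i++ {
    		start := time.Now()
    		// Wait blocks until a token is available, which is what produces
    		// the "Waited for ..." delays seen in the log above.
    		if err := limiter.Wait(context.Background()); err != nil {
    			panic(err)
    		}
    		fmt.Printf("request %2d waited %v\n", i, time.Since(start).Round(time.Millisecond))
    		// the GET against the API server would be issued here
    	}
    }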
	I0308 00:36:58.548436    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.759209    8176 request.go:629] Waited for 210.5405ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:36:58.759209    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:36:58.759436    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.759436    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.759436    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.760171    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.760171    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.760171    8176 round_trippers.go:580]     Audit-Id: 1bb11b25-337a-447b-a337-324b4d0777ee
	I0308 00:36:58.760171    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.760171    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.763105    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.763105    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.763105    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.763225    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ktnrd","generateName":"kube-proxy-","namespace":"kube-system","uid":"e76aaee4-f97d-4d55-b458-893eef62fb22","resourceVersion":"1626","creationTimestamp":"2024-03-08T00:20:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:20:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0308 00:36:58.947384    8176 request.go:629] Waited for 183.2452ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:36:58.947518    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:36:58.947518    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.947518    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.947518    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.947851    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.951127    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.951127    8176 round_trippers.go:580]     Audit-Id: fd602a3e-e0bc-47d1-b17e-04dbc5ee4e60
	I0308 00:36:58.951127    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.951127    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.951127    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.951127    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.951127    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.951554    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"4a97100d-ade6-4031-b2fe-9e9ba736320e","resourceVersion":"1765","creationTimestamp":"2024-03-08T00:30:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_30_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:30:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0308 00:36:58.952041    8176 pod_ready.go:97] node "multinode-397400-m03" hosting pod "kube-proxy-ktnrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400-m03" has status "Ready":"Unknown"
	I0308 00:36:58.952041    8176 pod_ready.go:81] duration metric: took 403.6011ms for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	E0308 00:36:58.952041    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400-m03" hosting pod "kube-proxy-ktnrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400-m03" has status "Ready":"Unknown"
	I0308 00:36:58.952154    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:59.140523    8176 request.go:629] Waited for 188.1889ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:36:59.140523    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:36:59.140523    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:59.140523    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:59.140523    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:59.144435    8176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:36:59.146666    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:59.146666    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:59.146666    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:59 GMT
	I0308 00:36:59.146666    8176 round_trippers.go:580]     Audit-Id: 926aaf78-6241-4be7-bcb1-cdc8bd53047d
	I0308 00:36:59.146666    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:59.146666    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:59.146666    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:59.146900    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nt8td","generateName":"kube-proxy-","namespace":"kube-system","uid":"dafb9385-fe20-4849-bd58-31dcf82b4a58","resourceVersion":"1674","creationTimestamp":"2024-03-08T00:13:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0308 00:36:59.342793    8176 request.go:629] Waited for 195.8912ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:59.343001    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:59.343117    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:59.343117    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:59.343117    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:59.349069    8176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:36:59.349069    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:59.349069    8176 round_trippers.go:580]     Audit-Id: d1b5ca68-eee3-4943-b4a6-263ee0ab1af6
	I0308 00:36:59.349069    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:59.349069    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:59.349069    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:59.349069    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:59.349069    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:59 GMT
	I0308 00:36:59.349069    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:36:59.349924    8176 pod_ready.go:92] pod "kube-proxy-nt8td" in "kube-system" namespace has status "Ready":"True"
	I0308 00:36:59.349924    8176 pod_ready.go:81] duration metric: took 397.766ms for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:59.349924    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:59.549671    8176 request.go:629] Waited for 199.3324ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:36:59.549702    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:36:59.549702    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:59.549702    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:59.549702    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:59.550871    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:36:59.550871    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:59.550871    8176 round_trippers.go:580]     Audit-Id: 468f48cd-1b06-4aa5-8fcf-d94054278419
	I0308 00:36:59.550871    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:59.550871    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:59.550871    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:59.550871    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:59.550871    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:59 GMT
	I0308 00:36:59.554221    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1744","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0308 00:36:59.745236    8176 request.go:629] Waited for 190.2545ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:59.745355    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:59.745355    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:59.745506    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:59.745506    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:59.745792    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:59.749049    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:59.749049    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:59.749132    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:59.749132    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:59.749132    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:59 GMT
	I0308 00:36:59.749132    8176 round_trippers.go:580]     Audit-Id: 9984b193-d770-4951-ae38-45e827f98258
	I0308 00:36:59.749132    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:59.749132    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:36:59.749813    8176 pod_ready.go:92] pod "kube-scheduler-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:36:59.749813    8176 pod_ready.go:81] duration metric: took 399.8859ms for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:59.749813    8176 pod_ready.go:38] duration metric: took 1.6058586s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
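The pod_ready waits above boil down to fetching each system pod and looking for a status condition of type "Ready" with status "True" (and, for pods on other nodes, checking the hosting node's Ready condition the same way, which is why kube-proxy-ktnrd was skipped). A small self-contained sketch of that condition check against a pod JSON document like the response bodies above; the struct models only the fields the check needs:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Only the fields needed for the readiness check are modelled here.
    type pod struct {
    	Metadata struct {
    		Name string `json:"name"`
    	} `json:"metadata"`
    	Status struct {
    		Conditions []struct {
    			Type   string `json:"type"`
    			Status string `json:"status"`
    		} `json:"conditions"`
    	} `json:"status"`
    }

    // isReady reports whether the pod carries a Ready=True condition,
    // which is what the pod_ready.go:92 lines above are reporting.
    func isReady(p pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == "Ready" {
    			return c.Status == "True"
    		}
    	}
    	return false
    }

    func main() {
    	raw := []byte(`{"metadata":{"name":"kube-scheduler-multinode-397400"},
    		"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
    	var p pod
    	if err := json.Unmarshal(raw, &p); err != nil {
    		panic(err)
    	}
    	fmt.Printf("pod %q ready: %v\n", p.Metadata.Name, isReady(p))
    }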
	I0308 00:36:59.749813    8176 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 00:36:59.762653    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 00:36:59.788133    8176 system_svc.go:56] duration metric: took 38.3199ms WaitForService to wait for kubelet
	I0308 00:36:59.788240    8176 kubeadm.go:576] duration metric: took 6.9224492s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 00:36:59.788240    8176 node_conditions.go:102] verifying NodePressure condition ...
	I0308 00:36:59.948084    8176 request.go:629] Waited for 159.4711ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes
	I0308 00:36:59.948289    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes
	I0308 00:36:59.948289    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:59.948289    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:59.948289    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:59.948605    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:59.952470    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:59.952470    8176 round_trippers.go:580]     Audit-Id: 688aff5e-1497-40cd-8be8-f5bbd8e3cef7
	I0308 00:36:59.952470    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:59.952470    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:59.952470    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:59.952470    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:59.952470    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:59 GMT
	I0308 00:36:59.953236    8176 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1930"},"items":[{"metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15485 chars]
	I0308 00:36:59.953784    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:36:59.953784    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:36:59.954332    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:36:59.954332    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:36:59.954332    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:36:59.954332    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:36:59.954332    8176 node_conditions.go:105] duration metric: took 166.09ms to run NodePressure ...
	I0308 00:36:59.954332    8176 start.go:240] waiting for startup goroutines ...
	I0308 00:36:59.954332    8176 start.go:254] writing updated cluster config ...
	I0308 00:36:59.958246    8176 out.go:177] 
	I0308 00:36:59.961232    8176 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:36:59.967583    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:36:59.967583    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:36:59.975242    8176 out.go:177] * Starting "multinode-397400-m03" worker node in "multinode-397400" cluster
	I0308 00:36:59.975577    8176 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 00:36:59.975577    8176 cache.go:56] Caching tarball of preloaded images
	I0308 00:36:59.978286    8176 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0308 00:36:59.978558    8176 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0308 00:36:59.978651    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:36:59.986898    8176 start.go:360] acquireMachinesLock for multinode-397400-m03: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 00:36:59.986898    8176 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-397400-m03"
	I0308 00:36:59.987517    8176 start.go:96] Skipping create...Using existing machine configuration
	I0308 00:36:59.987517    8176 fix.go:54] fixHost starting: m03
	I0308 00:36:59.987517    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:01.815288    8176 main.go:141] libmachine: [stdout =====>] : Off
	
	I0308 00:37:01.815288    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:01.825661    8176 fix.go:112] recreateIfNeeded on multinode-397400-m03: state=Stopped err=<nil>
	W0308 00:37:01.825661    8176 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 00:37:01.829418    8176 out.go:177] * Restarting existing hyperv VM for "multinode-397400-m03" ...
	I0308 00:37:01.832003    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-397400-m03
	I0308 00:37:04.617499    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:37:04.617499    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:04.617499    8176 main.go:141] libmachine: Waiting for host to start...
	I0308 00:37:04.627069    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:06.697701    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:06.708796    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:06.708863    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:08.913028    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:37:08.917118    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:09.920924    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:11.946471    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:11.946686    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:11.946686    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:14.212777    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:37:14.222616    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:15.232693    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:17.191145    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:17.191145    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:17.193453    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:19.421215    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:37:19.421215    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:20.430191    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:22.405409    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:22.405409    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:22.405871    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:24.705076    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:37:24.705076    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:25.719882    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:27.754910    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:27.754910    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:27.764685    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:30.033908    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:30.046843    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:30.050287    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:31.915340    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:31.928755    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:31.928818    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:34.126639    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:34.126639    8176 main.go:141] libmachine: [stderr =====>] : 
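The restart sequence above polls Hyper-V through PowerShell until the guest reports an IPv4 address; the attempts with empty stdout are polls made before DHCP had assigned one. A sketch of that polling loop from Go, reusing the exact PowerShell expression seen in the log; the retry interval and attempt count are assumptions, not minikube's actual values:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // vmIP asks Hyper-V for the first IP address of the named VM, using the
    // same PowerShell expression that appears in the log lines above.
    func vmIP(name string) (string, error) {
    	cmd := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	// Poll until the guest has obtained an address.
    	for attempt := 1; attempt <= 30; attempt++ {
    		ip, err := vmIP("multinode-397400-m03")
    		if err == nil && ip != "" {
    			fmt.Println("VM is reachable at", ip)
    			return
    		}
    		time.Sleep(1 * time.Second)
    	}
    	fmt.Println("timed out waiting for an IP address")
    }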
	I0308 00:37:34.137265    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:37:34.139967    8176 machine.go:94] provisionDockerMachine start ...
	I0308 00:37:34.140094    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:36.010262    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:36.010262    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:36.020390    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:38.248762    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:38.248762    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:38.265133    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:37:38.265257    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:37:38.265257    8176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 00:37:38.392134    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
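provisionDockerMachine drives the guest over SSH: each "About to run SSH command" / "SSH cmd err, output" pair above is one remote command (here, `hostname`). A minimal sketch of such a remote invocation with golang.org/x/crypto/ssh, using the key path and address that appear later in this log; the helper name is made up for illustration and error handling is kept minimal:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runRemote executes one command on the guest over SSH and returns its
    // combined output. The function name is illustrative, not minikube's.
    func runRemote(addr, keyPath, command string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(command)
    	return string(out), err
    }

    func main() {
    	out, err := runRemote("172.20.53.127:22",
    		`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m03\id_rsa`,
    		"hostname")
    	fmt.Printf("output: %q err: %v\n", out, err)
    }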
	
	I0308 00:37:38.392226    8176 buildroot.go:166] provisioning hostname "multinode-397400-m03"
	I0308 00:37:38.392294    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:40.252188    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:40.262557    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:40.262557    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:42.496093    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:42.505054    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:42.511578    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:37:42.511713    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:37:42.511713    8176 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-397400-m03 && echo "multinode-397400-m03" | sudo tee /etc/hostname
	I0308 00:37:42.655016    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-397400-m03
	
	I0308 00:37:42.655112    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:44.522061    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:44.537500    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:44.537619    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:46.812705    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:46.812705    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:46.817520    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:37:46.818515    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:37:46.818515    8176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-397400-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-397400-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-397400-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 00:37:46.958186    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 00:37:46.958186    8176 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 00:37:46.958186    8176 buildroot.go:174] setting up certificates
	I0308 00:37:46.958186    8176 provision.go:84] configureAuth start
	I0308 00:37:46.958186    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:48.845567    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:48.845567    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:48.845761    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:51.134648    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:51.134866    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:51.134977    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:53.012572    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:53.012572    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:53.012656    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:55.309630    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:55.309694    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:55.309694    8176 provision.go:143] copyHostCerts
	I0308 00:37:55.309694    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0308 00:37:55.309694    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 00:37:55.309694    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 00:37:55.310460    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 00:37:55.311284    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0308 00:37:55.311825    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 00:37:55.311941    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 00:37:55.312000    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 00:37:55.313152    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0308 00:37:55.313393    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 00:37:55.313449    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 00:37:55.313449    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 00:37:55.314312    8176 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-397400-m03 san=[127.0.0.1 172.20.53.127 localhost minikube multinode-397400-m03]
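The "generating server cert" step above issues a TLS server certificate signed by the local minikube CA, carrying the SANs listed (127.0.0.1, 172.20.53.127, localhost, minikube, multinode-397400-m03) so the Docker daemon on the new node can be reached under any of those names. A rough sketch of issuing that kind of CA-signed certificate with Go's crypto/x509; file names, key size, key format, and validity are assumptions, and minikube's real implementation differs in detail:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func check(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Load the CA cert and key (assumed here to be PEM-encoded PKCS#1 RSA).
    	caPEM, err := os.ReadFile("ca.pem")
    	check(err)
    	caKeyPEM, err := os.ReadFile("ca-key.pem")
    	check(err)
    	caBlock, _ := pem.Decode(caPEM)
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	check(err)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
    	check(err)

    	// New server key plus a certificate carrying the SANs from the log line above.
    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-397400-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0), // validity period is an assumption
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "multinode-397400-m03"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.53.127")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	check(err)
    	check(os.WriteFile("server.pem",
    		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
    	check(os.WriteFile("server-key.pem",
    		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
    			Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
    }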
	I0308 00:37:55.739436    8176 provision.go:177] copyRemoteCerts
	I0308 00:37:55.756516    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 00:37:55.756642    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:57.641399    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:57.641458    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:57.641458    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:59.913352    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:59.913352    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:59.913626    8176 sshutil.go:53] new ssh client: &{IP:172.20.53.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m03\id_rsa Username:docker}
	I0308 00:38:00.015351    8176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2586563s)
	I0308 00:38:00.015351    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0308 00:38:00.015351    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 00:38:00.061523    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0308 00:38:00.061728    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0308 00:38:00.096723    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0308 00:38:00.102065    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 00:38:00.145010    8176 provision.go:87] duration metric: took 13.1866987s to configureAuth
	I0308 00:38:00.145010    8176 buildroot.go:189] setting minikube options for container-runtime
	I0308 00:38:00.145710    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:38:00.145854    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:02.046123    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:02.052762    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:02.052762    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:04.297886    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:04.307504    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:04.313436    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:38:04.313955    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:38:04.313955    8176 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 00:38:04.436007    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 00:38:04.436007    8176 buildroot.go:70] root file system type: tmpfs
	I0308 00:38:04.436007    8176 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 00:38:04.436539    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:06.348727    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:06.349003    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:06.349003    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:08.614664    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:08.614713    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:08.619662    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:38:08.619662    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:38:08.620181    8176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.61.151"
	Environment="NO_PROXY=172.20.61.151,172.20.50.67"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 00:38:08.762595    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.61.151
	Environment=NO_PROXY=172.20.61.151,172.20.50.67
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 00:38:08.762686    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:10.617689    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:10.623394    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:10.623482    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:12.872267    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:12.872267    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:12.883177    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:38:12.883982    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:38:12.884010    8176 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 00:38:14.047323    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0308 00:38:14.047323    8176 machine.go:97] duration metric: took 39.9069196s to provisionDockerMachine
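The docker.service update above uses a compare-then-swap idiom: the candidate unit is written to docker.service.new, and only when it differs from the installed unit (or, as here, the unit does not exist yet) is it moved into place and the daemon reloaded, enabled and restarted. A minimal sketch of assembling that compound shell command; the paths and restart steps are read off the log, the helper itself is not minikube source.

    // Build the "replace the unit only if it changed" command seen above.
    package main

    import "fmt"

    func updateUnitCmd(unit string) string {
    	cur := "/lib/systemd/system/" + unit
    	next := cur + ".new"
    	return fmt.Sprintf(
    		"sudo diff -u %s %s || { sudo mv %s %s; sudo systemctl -f daemon-reload && sudo systemctl -f enable %s && sudo systemctl -f restart %s; }",
    		cur, next, next, cur, unit, unit)
    }

    func main() {
    	fmt.Println(updateUnitCmd("docker.service"))
    }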
	I0308 00:38:14.047323    8176 start.go:293] postStartSetup for "multinode-397400-m03" (driver="hyperv")
	I0308 00:38:14.047323    8176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 00:38:14.062410    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 00:38:14.062410    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:15.925138    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:15.925138    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:15.925213    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:18.162111    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:18.171636    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:18.172065    8176 sshutil.go:53] new ssh client: &{IP:172.20.53.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m03\id_rsa Username:docker}
	I0308 00:38:18.273305    8176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.210855s)
	I0308 00:38:18.292569    8176 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 00:38:18.299161    8176 command_runner.go:130] > NAME=Buildroot
	I0308 00:38:18.299161    8176 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0308 00:38:18.299161    8176 command_runner.go:130] > ID=buildroot
	I0308 00:38:18.299161    8176 command_runner.go:130] > VERSION_ID=2023.02.9
	I0308 00:38:18.299161    8176 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0308 00:38:18.299161    8176 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 00:38:18.299276    8176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 00:38:18.299438    8176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 00:38:18.300536    8176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 00:38:18.300618    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0308 00:38:18.309347    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 00:38:18.320905    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 00:38:18.368301    8176 start.go:296] duration metric: took 4.3209373s for postStartSetup
	I0308 00:38:18.368301    8176 fix.go:56] duration metric: took 1m18.3800388s for fixHost
	I0308 00:38:18.368301    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:20.201719    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:20.201719    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:20.201719    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:22.460042    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:22.463260    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:22.468142    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:38:22.468842    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:38:22.468842    8176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 00:38:22.594393    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709858302.608765026
	
	I0308 00:38:22.594393    8176 fix.go:216] guest clock: 1709858302.608765026
	I0308 00:38:22.594393    8176 fix.go:229] Guest: 2024-03-08 00:38:22.608765026 +0000 UTC Remote: 2024-03-08 00:38:18.3683013 +0000 UTC m=+340.607715401 (delta=4.240463726s)
	I0308 00:38:22.594393    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:24.448100    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:24.448100    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:24.448188    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:26.682698    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:26.682698    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:26.697628    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:38:26.698495    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:38:26.698495    8176 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709858302
	I0308 00:38:26.834776    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 00:38:22 UTC 2024
	
	I0308 00:38:26.834776    8176 fix.go:236] clock set: Fri Mar  8 00:38:22 UTC 2024
	 (err=<nil>)
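The clock fix above compares the guest's `date +%s.%N` output against the host-side timestamp and, because the delta (about 4.24s) is considered too large, pushes the host time into the guest with `sudo date -s @<epoch>`. A small illustrative calculation of that delta using the values from this log; the 2-second resync threshold is an assumption, not something the log states.

    // Recompute the guest/host clock delta reported by fix.go above.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	guest := time.Unix(1709858302, 608765026).UTC()               // parsed from "date +%s.%N"
    	remote := time.Date(2024, 3, 8, 0, 38, 18, 368301300, time.UTC) // host-side timestamp
    	delta := guest.Sub(remote)
    	fmt.Printf("delta=%v resync=%v cmd=%q\n",
    		delta, delta > 2*time.Second, fmt.Sprintf("sudo date -s @%d", guest.Unix()))
    }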
	I0308 00:38:26.834776    8176 start.go:83] releasing machines lock for "multinode-397400-m03", held for 1m26.8470524s
	I0308 00:38:26.835321    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:28.706471    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:28.706471    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:28.716677    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:30.937753    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:30.939815    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:30.943138    8176 out.go:177] * Found network options:
	I0308 00:38:30.947229    8176 out.go:177]   - NO_PROXY=172.20.61.151,172.20.50.67
	W0308 00:38:30.949070    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 00:38:30.950090    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 00:38:30.952229    8176 out.go:177]   - NO_PROXY=172.20.61.151,172.20.50.67
	W0308 00:38:30.955254    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 00:38:30.955254    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 00:38:30.955653    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 00:38:30.955653    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 00:38:30.956850    8176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 00:38:30.956850    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:30.960821    8176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0308 00:38:30.960821    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:32.931812    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:32.931812    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:32.931812    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:32.931812    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:32.942554    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:32.942792    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:35.301201    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:35.307244    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:35.307244    8176 sshutil.go:53] new ssh client: &{IP:172.20.53.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m03\id_rsa Username:docker}
	I0308 00:38:35.319893    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:35.325168    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:35.325461    8176 sshutil.go:53] new ssh client: &{IP:172.20.53.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m03\id_rsa Username:docker}
	I0308 00:38:35.509221    8176 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0308 00:38:35.510086    8176 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5531934s)
	I0308 00:38:35.510160    8176 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0308 00:38:35.510160    8176 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5492966s)
	W0308 00:38:35.510160    8176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 00:38:35.522567    8176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 00:38:35.541070    8176 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0308 00:38:35.546072    8176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 00:38:35.546105    8176 start.go:494] detecting cgroup driver to use...
	I0308 00:38:35.546268    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:38:35.574424    8176 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0308 00:38:35.587019    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 00:38:35.616742    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 00:38:35.634361    8176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 00:38:35.644270    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 00:38:35.683700    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:38:35.712918    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 00:38:35.741859    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:38:35.769916    8176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 00:38:35.804682    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 00:38:35.833964    8176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 00:38:35.836875    8176 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0308 00:38:35.861153    8176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 00:38:35.894963    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:38:36.087567    8176 ssh_runner.go:195] Run: sudo systemctl restart containerd
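The containerd configuration above is done entirely with in-place sed edits of /etc/containerd/config.toml (pause image, OOM-score restriction, cgroupfs instead of SystemdCgroup, the runc v2 shim, and the CNI conf dir), followed by a daemon-reload and restart. Collected in one place for readability, a sketch that just prints the core edits; this is a restatement of the commands already shown, not minikube code.

    // The sed edits applied above to /etc/containerd/config.toml.
    package main

    import "fmt"

    func main() {
    	edits := []string{
    		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
    		`sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml`,
    		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
    		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
    	}
    	for _, e := range edits {
    		fmt.Println(e)
    	}
    }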
	I0308 00:38:36.116454    8176 start.go:494] detecting cgroup driver to use...
	I0308 00:38:36.130495    8176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 00:38:36.151821    8176 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0308 00:38:36.151821    8176 command_runner.go:130] > [Unit]
	I0308 00:38:36.151821    8176 command_runner.go:130] > Description=Docker Application Container Engine
	I0308 00:38:36.151821    8176 command_runner.go:130] > Documentation=https://docs.docker.com
	I0308 00:38:36.151821    8176 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0308 00:38:36.151821    8176 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0308 00:38:36.151821    8176 command_runner.go:130] > StartLimitBurst=3
	I0308 00:38:36.151821    8176 command_runner.go:130] > StartLimitIntervalSec=60
	I0308 00:38:36.151821    8176 command_runner.go:130] > [Service]
	I0308 00:38:36.151821    8176 command_runner.go:130] > Type=notify
	I0308 00:38:36.151821    8176 command_runner.go:130] > Restart=on-failure
	I0308 00:38:36.151821    8176 command_runner.go:130] > Environment=NO_PROXY=172.20.61.151
	I0308 00:38:36.151821    8176 command_runner.go:130] > Environment=NO_PROXY=172.20.61.151,172.20.50.67
	I0308 00:38:36.151821    8176 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0308 00:38:36.151821    8176 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0308 00:38:36.151821    8176 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0308 00:38:36.151821    8176 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0308 00:38:36.151821    8176 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0308 00:38:36.151821    8176 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0308 00:38:36.151821    8176 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0308 00:38:36.151821    8176 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0308 00:38:36.151821    8176 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0308 00:38:36.151821    8176 command_runner.go:130] > ExecStart=
	I0308 00:38:36.151821    8176 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0308 00:38:36.151821    8176 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0308 00:38:36.151821    8176 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0308 00:38:36.151821    8176 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0308 00:38:36.151821    8176 command_runner.go:130] > LimitNOFILE=infinity
	I0308 00:38:36.151821    8176 command_runner.go:130] > LimitNPROC=infinity
	I0308 00:38:36.151821    8176 command_runner.go:130] > LimitCORE=infinity
	I0308 00:38:36.151821    8176 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0308 00:38:36.151821    8176 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0308 00:38:36.151821    8176 command_runner.go:130] > TasksMax=infinity
	I0308 00:38:36.151821    8176 command_runner.go:130] > TimeoutStartSec=0
	I0308 00:38:36.151821    8176 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0308 00:38:36.151821    8176 command_runner.go:130] > Delegate=yes
	I0308 00:38:36.151821    8176 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0308 00:38:36.151821    8176 command_runner.go:130] > KillMode=process
	I0308 00:38:36.151821    8176 command_runner.go:130] > [Install]
	I0308 00:38:36.151821    8176 command_runner.go:130] > WantedBy=multi-user.target
	I0308 00:38:36.162943    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:38:36.195879    8176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 00:38:36.226987    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:38:36.260137    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:38:36.290752    8176 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0308 00:38:36.363249    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:38:36.383692    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:38:36.412595    8176 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0308 00:38:36.423699    8176 ssh_runner.go:195] Run: which cri-dockerd
	I0308 00:38:36.429893    8176 command_runner.go:130] > /usr/bin/cri-dockerd
	I0308 00:38:36.439866    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 00:38:36.457624    8176 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 00:38:36.493586    8176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 00:38:36.649200    8176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 00:38:36.795818    8176 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 00:38:36.795905    8176 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0308 00:38:36.834893    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:38:36.999557    8176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 00:38:38.578421    8176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5788498s)
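The log only records that a 130-byte /etc/docker/daemon.json was written before restarting Docker to switch it to the cgroupfs driver; the file's contents are not shown. The sketch below demonstrates one plausible daemon.json that selects that driver, so treat the exact fields as an assumption rather than the payload minikube actually wrote.

    // Emit a daemon.json that pins Docker to the cgroupfs driver (assumed content).
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	b, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(b))
    }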
	I0308 00:38:38.588933    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 00:38:38.619856    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:38:38.650443    8176 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 00:38:38.819227    8176 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 00:38:38.979514    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:38:39.160602    8176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 00:38:39.210867    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:38:39.244985    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:38:39.421568    8176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 00:38:39.507838    8176 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 00:38:39.520401    8176 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 00:38:39.530379    8176 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0308 00:38:39.530379    8176 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0308 00:38:39.530379    8176 command_runner.go:130] > Device: 0,22	Inode: 862         Links: 1
	I0308 00:38:39.530379    8176 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0308 00:38:39.530379    8176 command_runner.go:130] > Access: 2024-03-08 00:38:39.456636032 +0000
	I0308 00:38:39.531908    8176 command_runner.go:130] > Modify: 2024-03-08 00:38:39.456636032 +0000
	I0308 00:38:39.531908    8176 command_runner.go:130] > Change: 2024-03-08 00:38:39.459636053 +0000
	I0308 00:38:39.531908    8176 command_runner.go:130] >  Birth: -
	I0308 00:38:39.531953    8176 start.go:562] Will wait 60s for crictl version
	I0308 00:38:39.541992    8176 ssh_runner.go:195] Run: which crictl
	I0308 00:38:39.548555    8176 command_runner.go:130] > /usr/bin/crictl
	I0308 00:38:39.558585    8176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 00:38:39.624546    8176 command_runner.go:130] > Version:  0.1.0
	I0308 00:38:39.626261    8176 command_runner.go:130] > RuntimeName:  docker
	I0308 00:38:39.626261    8176 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0308 00:38:39.626261    8176 command_runner.go:130] > RuntimeApiVersion:  v1
	I0308 00:38:39.626329    8176 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 00:38:39.634356    8176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:38:39.661502    8176 command_runner.go:130] > 24.0.7
	I0308 00:38:39.671048    8176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:38:39.699490    8176 command_runner.go:130] > 24.0.7
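Both version probes above rely on Docker's Go-template formatting (`--format {{.Server.Version}}`). The stand-alone snippet below shows how such a template selects the field; the struct is a stand-in for illustration, not Docker's real API types.

    // Apply the {{.Server.Version}} template to a minimal stand-in struct.
    package main

    import (
    	"os"
    	"text/template"
    )

    type version struct{ Server struct{ Version string } }

    func main() {
    	v := version{}
    	v.Server.Version = "24.0.7"
    	t := template.Must(template.New("v").Parse("{{.Server.Version}}\n"))
    	t.Execute(os.Stdout, v)
    }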
	I0308 00:38:39.703939    8176 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 00:38:39.707502    8176 out.go:177]   - env NO_PROXY=172.20.61.151
	I0308 00:38:39.709851    8176 out.go:177]   - env NO_PROXY=172.20.61.151,172.20.50.67
	I0308 00:38:39.711175    8176 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 00:38:39.715783    8176 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 00:38:39.715783    8176 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 00:38:39.715783    8176 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 00:38:39.715783    8176 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 00:38:39.715783    8176 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 00:38:39.715783    8176 ip.go:210] interface addr: 172.20.48.1/20
	I0308 00:38:39.731062    8176 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 00:38:39.736154    8176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
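The one-liner above pins host.minikube.internal in /etc/hosts to the gateway address of the vEthernet (Default Switch) interface: strip any existing entry, append the new one, and copy the result back with sudo. A minimal in-memory sketch of the same filter-and-append step (the function name is hypothetical):

    // Drop a stale hosts entry and append the new mapping, mirroring the
    // grep -v / echo / cp pipeline above.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func pinHost(hosts, name, ip string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		f := strings.Fields(line)
    		if len(f) == 2 && f[1] == name {
    			continue // drop the stale entry, mirroring `grep -v`
    		}
    		kept = append(kept, line)
    	}
    	return strings.Join(kept, "\n") + "\n" + ip + "\t" + name + "\n"
    }

    func main() {
    	fmt.Print(pinHost("127.0.0.1\tlocalhost", "host.minikube.internal", "172.20.48.1"))
    }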
	I0308 00:38:39.754341    8176 mustload.go:65] Loading cluster: multinode-397400
	I0308 00:38:39.755024    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:38:39.755331    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:38:41.629166    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:41.629166    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:41.635956    8176 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:38:41.636634    8176 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400 for IP: 172.20.53.127
	I0308 00:38:41.636634    8176 certs.go:194] generating shared ca certs ...
	I0308 00:38:41.636802    8176 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:38:41.636849    8176 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 00:38:41.637656    8176 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 00:38:41.637953    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 00:38:41.638260    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0308 00:38:41.638483    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 00:38:41.638698    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 00:38:41.639039    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 00:38:41.639039    8176 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 00:38:41.639615    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 00:38:41.639897    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 00:38:41.639897    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 00:38:41.639897    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 00:38:41.640782    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 00:38:41.640782    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:38:41.640782    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0308 00:38:41.640782    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0308 00:38:41.641561    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 00:38:41.688344    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 00:38:41.728697    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 00:38:41.768518    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 00:38:41.811304    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 00:38:41.850024    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 00:38:41.889072    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 00:38:41.944033    8176 ssh_runner.go:195] Run: openssl version
	I0308 00:38:41.951478    8176 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0308 00:38:41.961195    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 00:38:41.991868    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 00:38:41.994064    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:38:41.994064    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:38:41.999821    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 00:38:42.010750    8176 command_runner.go:130] > 3ec20f2e
	I0308 00:38:42.026007    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 00:38:42.056713    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 00:38:42.085516    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:38:42.093101    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:38:42.093101    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:38:42.104757    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:38:42.112321    8176 command_runner.go:130] > b5213941
	I0308 00:38:42.122694    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 00:38:42.151961    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 00:38:42.181972    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 00:38:42.184088    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:38:42.184088    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:38:42.198369    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 00:38:42.201513    8176 command_runner.go:130] > 51391683
	I0308 00:38:42.207168    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
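The certificate plumbing above copies each CA file into /usr/share/ca-certificates, asks `openssl x509 -hash -noout` for its subject hash, and then creates the `<hash>.0` symlink in /etc/ssl/certs that OpenSSL's lookup path expects. A sketch of just the symlink step, with the hash value taken from the log and the paths purely illustrative:

    // Install the <hash>.0 symlink OpenSSL uses to find a trusted CA.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func installCert(certPath, hash string) error {
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // ignore "not found"; avoids Symlink failing with "file exists"
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem", "b5213941"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }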
	I0308 00:38:42.243610    8176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 00:38:42.245901    8176 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 00:38:42.249343    8176 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 00:38:42.249537    8176 kubeadm.go:928] updating node {m03 172.20.53.127 0 v1.28.4  false true} ...
	I0308 00:38:42.249843    8176 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-397400-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.53.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 00:38:42.259227    8176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 00:38:42.280573    8176 command_runner.go:130] > kubeadm
	I0308 00:38:42.280573    8176 command_runner.go:130] > kubectl
	I0308 00:38:42.280573    8176 command_runner.go:130] > kubelet
	I0308 00:38:42.280711    8176 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 00:38:42.291438    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0308 00:38:42.309265    8176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0308 00:38:42.335838    8176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 00:38:42.373710    8176 ssh_runner.go:195] Run: grep 172.20.61.151	control-plane.minikube.internal$ /etc/hosts
	I0308 00:38:42.379924    8176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.61.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 00:38:42.408877    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:38:42.577589    8176 ssh_runner.go:195] Run: sudo systemctl start kubelet
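For the joining worker, the kubelet drop-in written above overrides ExecStart so the kubelet starts with the node-specific hostname, node IP and bootstrap kubeconfig. A hypothetical helper (not minikube source) that assembles that ExecStart line from the values seen in this log:

    // Build the kubelet ExecStart line written into 10-kubeadm.conf above.
    package main

    import "fmt"

    func kubeletExecStart(version, nodeName, nodeIP string) string {
    	return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet "+
    		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
    		"--config=/var/lib/kubelet/config.yaml "+
    		"--hostname-override=%s "+
    		"--kubeconfig=/etc/kubernetes/kubelet.conf "+
    		"--node-ip=%s", version, nodeName, nodeIP)
    }

    func main() {
    	fmt.Println(kubeletExecStart("v1.28.4", "multinode-397400-m03", "172.20.53.127"))
    }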
	I0308 00:38:42.604547    8176 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:38:42.604861    8176 start.go:316] joinCluster: &{Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.61.151 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.50.67 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.53.127 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 00:38:42.605447    8176 start.go:329] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.20.53.127 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0308 00:38:42.605617    8176 host.go:66] Checking if "multinode-397400-m03" exists ...
	I0308 00:38:42.606270    8176 mustload.go:65] Loading cluster: multinode-397400
	I0308 00:38:42.606763    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:38:42.607503    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:38:44.546577    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:44.546577    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:44.546577    8176 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:38:44.556974    8176 api_server.go:166] Checking apiserver status ...
	I0308 00:38:44.567565    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:38:44.567565    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:38:46.466788    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:46.466788    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:46.466788    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:48.706501    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:38:48.715976    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:48.716151    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:38:48.821797    8176 command_runner.go:130] > 1978
	I0308 00:38:48.821797    8176 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.2541911s)
	I0308 00:38:48.839262    8176 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1978/cgroup
	W0308 00:38:48.856353    8176 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1978/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 00:38:48.868331    8176 ssh_runner.go:195] Run: ls
	I0308 00:38:48.874644    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:38:48.882227    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 200:
	ok
	I0308 00:38:48.897567    8176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-397400-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0308 00:38:49.030970    8176 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-srl7h, kube-system/kube-proxy-ktnrd
	I0308 00:38:49.042258    8176 command_runner.go:130] > node/multinode-397400-m03 cordoned
	I0308 00:38:49.044411    8176 command_runner.go:130] > node/multinode-397400-m03 drained
	I0308 00:38:49.044553    8176 node.go:125] successfully drained node "multinode-397400-m03"
	I0308 00:38:49.044553    8176 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0308 00:38:49.044553    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:50.947531    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:50.947531    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:50.957846    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:53.179365    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:53.179365    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:53.190497    8176 sshutil.go:53] new ssh client: &{IP:172.20.53.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m03\id_rsa Username:docker}
	I0308 00:38:53.630212    8176 command_runner.go:130] ! W0308 00:38:53.645380    1473 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0308 00:38:54.012245    8176 command_runner.go:130] > [preflight] Running pre-flight checks
	I0308 00:38:54.012245    8176 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0308 00:38:54.012245    8176 command_runner.go:130] > [reset] Stopping the kubelet service
	I0308 00:38:54.012245    8176 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0308 00:38:54.012245    8176 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0308 00:38:54.012245    8176 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0308 00:38:54.012245    8176 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0308 00:38:54.012245    8176 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0308 00:38:54.012245    8176 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0308 00:38:54.012245    8176 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0308 00:38:54.012245    8176 command_runner.go:130] > to reset your system's IPVS tables.
	I0308 00:38:54.012245    8176 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0308 00:38:54.012245    8176 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0308 00:38:54.012245    8176 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.967645s)
	I0308 00:38:54.012245    8176 node.go:152] successfully reset node "multinode-397400-m03"
	I0308 00:38:54.013808    8176 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:38:54.014381    8176 kapi.go:59] client config for multinode-397400: &rest.Config{Host:"https://172.20.61.151:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 00:38:54.015400    8176 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0308 00:38:54.015485    8176 round_trippers.go:463] DELETE https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:38:54.015485    8176 round_trippers.go:469] Request Headers:
	I0308 00:38:54.015485    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:38:54.015485    8176 round_trippers.go:473]     Content-Type: application/json
	I0308 00:38:54.015485    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:38:54.026082    8176 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0308 00:38:54.033974    8176 round_trippers.go:577] Response Headers:
	I0308 00:38:54.033974    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:38:54.033974    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:38:54.033974    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:38:54.033974    8176 round_trippers.go:580]     Content-Length: 171
	I0308 00:38:54.033974    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:38:54 GMT
	I0308 00:38:54.033974    8176 round_trippers.go:580]     Audit-Id: 4934f935-a258-48b1-960f-184d3168e43d
	I0308 00:38:54.033974    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:38:54.033974    8176 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-397400-m03","kind":"nodes","uid":"4a97100d-ade6-4031-b2fe-9e9ba736320e"}}
	I0308 00:38:54.033974    8176 node.go:173] successfully deleted node "multinode-397400-m03"
	I0308 00:38:54.033974    8176 start.go:333] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.20.53.127 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
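Re-adding m03 therefore follows a remove-then-rejoin sequence: drain the stale node, `kubeadm reset` it, DELETE its Node object through the API server, and then mint a fresh join command with `kubeadm token create --print-join-command`. The node deletion is just an authenticated DELETE on /api/v1/nodes/<name>; a rough sketch with plain net/http, assuming the TLS and credential setup done by client-go elsewhere:

    // Delete a Node object via the Kubernetes API (auth/TLS handled elsewhere).
    package main

    import (
    	"fmt"
    	"net/http"
    )

    func deleteNode(client *http.Client, apiServer, node string) (*http.Response, error) {
    	req, err := http.NewRequest(http.MethodDelete,
    		fmt.Sprintf("%s/api/v1/nodes/%s", apiServer, node), nil)
    	if err != nil {
    		return nil, err
    	}
    	req.Header.Set("Accept", "application/json")
    	return client.Do(req)
    }

    func main() {
    	resp, err := deleteNode(http.DefaultClient, "https://172.20.61.151:8443", "multinode-397400-m03")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println(resp.Status)
    }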
	I0308 00:38:54.033974    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0308 00:38:54.033974    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:38:55.904898    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:55.905110    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:55.905173    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:58.135211    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:38:58.136861    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:58.137333    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:38:58.314671    8176 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token okpz6a.0qop7h4cmrekc9k9 --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
	I0308 00:38:58.314766    8176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.2807208s)
	I0308 00:38:58.314766    8176 start.go:342] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.20.53.127 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0308 00:38:58.314766    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token okpz6a.0qop7h4cmrekc9k9 --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-397400-m03"
	I0308 00:38:58.538177    8176 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 00:39:01.315807    8176 command_runner.go:130] > [preflight] Running pre-flight checks
	I0308 00:39:01.315950    8176 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0308 00:39:01.315950    8176 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0308 00:39:01.315950    8176 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 00:39:01.315950    8176 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 00:39:01.315950    8176 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0308 00:39:01.315950    8176 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0308 00:39:01.316055    8176 command_runner.go:130] > This node has joined the cluster:
	I0308 00:39:01.316055    8176 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0308 00:39:01.316055    8176 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0308 00:39:01.316098    8176 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0308 00:39:01.316098    8176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token okpz6a.0qop7h4cmrekc9k9 --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-397400-m03": (3.0013042s)
	I0308 00:39:01.316186    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0308 00:39:01.492435    8176 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0308 00:39:01.668894    8176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-397400-m03 minikube.k8s.io/updated_at=2024_03_08T00_39_01_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=multinode-397400 minikube.k8s.io/primary=false
	I0308 00:39:01.791222    8176 command_runner.go:130] > node/multinode-397400-m03 labeled
	I0308 00:39:01.791318    8176 start.go:318] duration metric: took 19.1862767s to joinCluster
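	The lines above record the complete worker-join step: a join command is generated on the control plane with "kubeadm token create --print-join-command --ttl=0", that command is executed on the new node with --ignore-preflight-errors=all, the cri-dockerd socket, and an explicit --node-name, and then kubelet is enabled and the node is labeled. A minimal Go sketch of that two-step flow, not minikube's actual implementation, with the flags taken from the log and error handling simplified:

	package main

	// Sketch only: mirrors the two-step worker join recorded in the log above.
	// Node name and extra flags come from that log; the token itself is produced
	// fresh by kubeadm each time, so nothing here is a real credential.

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Step 1 (on the control-plane node): print a join command with a
		// non-expiring token, as the log's "kubeadm token create" call does.
		out, err := exec.Command("kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").Output()
		if err != nil {
			panic(err)
		}
		joinCmd := strings.TrimSpace(string(out))
		fmt.Println("join command:", joinCmd)

		// Step 2 (on the new worker): run that join command with the same extra
		// flags shown above (preflight errors ignored, cri-dockerd socket,
		// explicit node name).
		full := joinCmd + " --ignore-preflight-errors=all" +
			" --cri-socket unix:///var/run/cri-dockerd.sock" +
			" --node-name=multinode-397400-m03"
		if err := exec.Command("bash", "-c", full).Run(); err != nil {
			panic(err)
		}

		// The log then enables and starts kubelet ("systemctl enable kubelet",
		// "systemctl start kubelet") and labels the node with kubectl.
	}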
	I0308 00:39:01.791452    8176 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.20.53.127 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0308 00:39:01.794049    8176 out.go:177] * Verifying Kubernetes components...
	I0308 00:39:01.791584    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:39:01.806689    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:39:02.008686    8176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:39:02.036777    8176 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:39:02.038167    8176 kapi.go:59] client config for multinode-397400: &rest.Config{Host:"https://172.20.61.151:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 00:39:02.039185    8176 node_ready.go:35] waiting up to 6m0s for node "multinode-397400-m03" to be "Ready" ...
	I0308 00:39:02.039721    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:02.039721    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:02.039721    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:02.039721    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:02.039969    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:02.039969    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:02.039969    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:02.039969    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:02.039969    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:02.039969    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:02.039969    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:02 GMT
	I0308 00:39:02.039969    8176 round_trippers.go:580]     Audit-Id: 23174ea5-7c67-46fc-aea5-83801f390d38
	I0308 00:39:02.044369    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2074","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3519 chars]
	I0308 00:39:02.542926    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:02.542978    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:02.543121    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:02.543121    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:02.547561    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:39:02.548568    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:02.548608    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:02.548608    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:02.548608    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:02 GMT
	I0308 00:39:02.548608    8176 round_trippers.go:580]     Audit-Id: 8d89c28e-be85-417d-8a7c-6df46ed7fce1
	I0308 00:39:02.548608    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:02.548608    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:02.548608    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2074","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3519 chars]
	I0308 00:39:03.064069    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:03.064150    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:03.064182    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:03.064182    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:03.069919    8176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:39:03.070960    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:03.071012    8176 round_trippers.go:580]     Audit-Id: 532b04f0-54db-4375-a964-70ca3487190f
	I0308 00:39:03.071012    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:03.071012    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:03.071012    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:03.071012    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:03.071065    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:03 GMT
	I0308 00:39:03.071111    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2074","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3519 chars]
	I0308 00:39:03.543916    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:03.543916    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:03.543916    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:03.543916    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:03.544294    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:03.548677    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:03.548677    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:03.548677    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:03.548677    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:03 GMT
	I0308 00:39:03.548677    8176 round_trippers.go:580]     Audit-Id: 37b1f60b-908d-4dca-9bfd-3a29c979e3a1
	I0308 00:39:03.548779    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:03.548779    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:03.548905    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2074","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3519 chars]
	I0308 00:39:04.047534    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:04.047534    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:04.047534    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:04.047626    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:04.049485    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:39:04.049485    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:04.049485    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:04 GMT
	I0308 00:39:04.049485    8176 round_trippers.go:580]     Audit-Id: 198cd6d3-8186-4f02-b63f-a7a36ad9901c
	I0308 00:39:04.049485    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:04.049485    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:04.049485    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:04.051958    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:04.052101    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2074","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3519 chars]
	I0308 00:39:04.052822    8176 node_ready.go:53] node "multinode-397400-m03" has status "Ready":"False"
	I0308 00:39:04.540796    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:04.541010    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:04.541010    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:04.541010    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:04.542904    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:39:04.542904    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:04.542904    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:04 GMT
	I0308 00:39:04.542904    8176 round_trippers.go:580]     Audit-Id: c85d7f40-d29a-407a-8a49-b1cc1ac7229e
	I0308 00:39:04.544328    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:04.544328    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:04.544328    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:04.544328    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:04.544520    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2089","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3628 chars]
	I0308 00:39:05.040045    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:05.040108    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.040108    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.040189    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.040996    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:05.040996    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.044118    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.044118    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.044118    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.044187    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.044187    8176 round_trippers.go:580]     Audit-Id: 721d6ef2-096f-4b46-b530-1fef4408d295
	I0308 00:39:05.044187    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.044327    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2093","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3763 chars]
	I0308 00:39:05.044573    8176 node_ready.go:49] node "multinode-397400-m03" has status "Ready":"True"
	I0308 00:39:05.044573    8176 node_ready.go:38] duration metric: took 3.0053597s for node "multinode-397400-m03" to be "Ready" ...
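	The repeated GETs of /api/v1/nodes/multinode-397400-m03 above are the node-readiness wait: the node is fetched roughly every 500ms until its Ready condition reports True. A minimal client-go sketch of the same loop, assuming a standard kubeconfig path (placeholder, not the path used by this job) and not minikube's actual node_ready.go code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node object until its Ready condition is True or
	// the timeout elapses, mirroring the GET-every-500ms pattern in the log.
	func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return true
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", waitNodeReady(cs, "multinode-397400-m03", 6*time.Minute))
	}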
	I0308 00:39:05.044573    8176 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:39:05.044573    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:39:05.044573    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.045153    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.045153    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.045321    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:05.045321    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.045321    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.045321    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.045321    8176 round_trippers.go:580]     Audit-Id: 682b4eec-c630-43c5-b06f-3b8add619111
	I0308 00:39:05.045321    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.045321    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.045321    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.050626    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2094"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1757","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82099 chars]
	I0308 00:39:05.054433    8176 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.054433    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:39:05.054433    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.054433    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.054433    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.055194    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:05.055194    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.055194    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.055194    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.055194    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.055194    8176 round_trippers.go:580]     Audit-Id: 5d4c0bb1-f372-4287-8627-8d1d9a186415
	I0308 00:39:05.055194    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.055194    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.058437    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1757","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0308 00:39:05.059874    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:05.059942    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.059942    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.059942    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.062776    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:39:05.062776    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.062776    8176 round_trippers.go:580]     Audit-Id: 155c000d-e85f-45b3-bbba-09eff4673bc8
	I0308 00:39:05.062776    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.062776    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.062776    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.062776    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.063463    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.063718    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:39:05.063718    8176 pod_ready.go:92] pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:05.063718    8176 pod_ready.go:81] duration metric: took 9.2852ms for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
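	Each system-critical pod is then checked the same way: GET the pod, look at its PodReady condition, and GET the node it runs on. A small sketch of the per-pod check behind these pod_ready lines, using assumed client-go types rather than minikube's actual helper:

	package ready

	import corev1 "k8s.io/api/core/v1"

	// podReady reports whether a pod's PodReady condition is True, which is the
	// per-pod test applied to coredns, etcd, kube-apiserver, and the rest above.
	func podReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}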
	I0308 00:39:05.063718    8176 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.064270    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:39:05.064270    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.064270    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.064270    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.070662    8176 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 00:39:05.070785    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.070867    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.070867    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.070867    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.070867    8176 round_trippers.go:580]     Audit-Id: 12133602-7b42-4f1c-bf0f-be7c93cf2f1f
	I0308 00:39:05.070867    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.070867    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.071422    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1768","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0308 00:39:05.071602    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:05.071602    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.071602    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.071602    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.074798    8176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:39:05.074798    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.075478    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.075478    8176 round_trippers.go:580]     Audit-Id: 64baf02b-e56c-4067-9d8c-55fd6578aee6
	I0308 00:39:05.075478    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.075478    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.075478    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.075478    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.076721    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:39:05.077669    8176 pod_ready.go:92] pod "etcd-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:05.077669    8176 pod_ready.go:81] duration metric: took 13.9505ms for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.077669    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.078431    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-397400
	I0308 00:39:05.078483    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.078533    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.078533    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.083537    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:39:05.083537    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.083537    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.083537    8176 round_trippers.go:580]     Audit-Id: b6943add-398b-4593-964d-980a161be401
	I0308 00:39:05.083537    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.083537    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.083537    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.083537    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.083537    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-397400","namespace":"kube-system","uid":"1e615aff-4d66-4ded-b27a-16bc990c80a6","resourceVersion":"1767","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.61.151:8443","kubernetes.io/config.hash":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.mirror":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143837944Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0308 00:39:05.084145    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:05.084145    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.084145    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.084145    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.087347    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:39:05.087347    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.087347    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.087347    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.087347    8176 round_trippers.go:580]     Audit-Id: 1f0b5b2f-8ab9-412e-bec9-7c0e3d9d6cd9
	I0308 00:39:05.087347    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.087347    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.087347    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.087347    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:39:05.088020    8176 pod_ready.go:92] pod "kube-apiserver-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:05.088020    8176 pod_ready.go:81] duration metric: took 10.3511ms for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.088020    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.088020    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-397400
	I0308 00:39:05.088020    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.088020    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.088020    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.088633    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:05.088633    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.088633    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.088633    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.088633    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.088633    8176 round_trippers.go:580]     Audit-Id: ae541466-7775-464c-9ce9-d7a996300698
	I0308 00:39:05.088633    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.088633    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.092213    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-397400","namespace":"kube-system","uid":"33cdb29c-e857-4fc2-b950-4fdde032852f","resourceVersion":"1769","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.mirror":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.seen":"2024-03-08T00:13:39.441057580Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0308 00:39:05.092917    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:05.092917    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.092917    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.092917    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.094201    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:39:05.094201    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.094201    8176 round_trippers.go:580]     Audit-Id: b2a01503-a08c-4bd2-8755-820705eee29d
	I0308 00:39:05.094201    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.094201    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.094201    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.094201    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.094201    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.096855    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:39:05.097198    8176 pod_ready.go:92] pod "kube-controller-manager-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:05.097198    8176 pod_ready.go:81] duration metric: took 9.1777ms for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.097198    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.250441    8176 request.go:629] Waited for 153.1094ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:39:05.250561    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:39:05.250561    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.250561    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.250776    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.251486    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:05.251486    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.251486    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.251486    8176 round_trippers.go:580]     Audit-Id: d9cba37b-2be3-4416-8aab-9394138986bc
	I0308 00:39:05.251486    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.253767    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.253767    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.253767    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.253939    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gw9w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b5de9a2-0643-466e-9a31-4349596c0417","resourceVersion":"1907","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5538 chars]
	I0308 00:39:05.453024    8176 request.go:629] Waited for 198.8725ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:39:05.453109    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:39:05.453109    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.453109    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.453109    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.453472    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:05.457565    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.457565    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.457565    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.457565    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.457565    8176 round_trippers.go:580]     Audit-Id: 5d00296c-7cc7-437d-babd-9f162725960d
	I0308 00:39:05.457565    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.457565    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.457834    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1928","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0308 00:39:05.458189    8176 pod_ready.go:92] pod "kube-proxy-gw9w9" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:05.458274    8176 pod_ready.go:81] duration metric: took 361.0733ms for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
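	The "Waited ... due to client-side throttling, not priority and fairness" messages come from client-go's default rate limiter: the rest.Config dumped earlier in this log shows QPS:0 and Burst:0, which client-go treats as its defaults of 5 requests per second with a burst of 10, so rapid bursts of node and pod GETs get spaced out on the client side. A sketch of raising those limits when building a client, with illustrative values and a placeholder kubeconfig path:

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		// Leaving QPS/Burst at zero means client-go's defaults (5 QPS, burst 10),
		// which is what produces the throttling waits seen in this log.
		cfg.QPS = 50
		cfg.Burst = 100
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}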
	I0308 00:39:05.458274    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.646512    8176 request.go:629] Waited for 188.0842ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:39:05.646512    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:39:05.646512    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.646512    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.646512    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.649201    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:39:05.649201    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.649201    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.649201    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.649201    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.649201    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.649201    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.649201    8176 round_trippers.go:580]     Audit-Id: 7627af74-c76b-4918-9644-67af9a175448
	I0308 00:39:05.650436    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ktnrd","generateName":"kube-proxy-","namespace":"kube-system","uid":"e76aaee4-f97d-4d55-b458-893eef62fb22","resourceVersion":"2080","creationTimestamp":"2024-03-08T00:20:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:20:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5542 chars]
	I0308 00:39:05.847000    8176 request.go:629] Waited for 195.6887ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:05.847165    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:05.847165    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.847165    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.847165    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.847466    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:05.847466    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.850593    8176 round_trippers.go:580]     Audit-Id: d22e1496-65ac-4eb0-a128-3e7300ddb930
	I0308 00:39:05.850593    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.850593    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.850593    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.850593    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.850593    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.850752    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2093","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3763 chars]
	I0308 00:39:05.850752    8176 pod_ready.go:92] pod "kube-proxy-ktnrd" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:05.850752    8176 pod_ready.go:81] duration metric: took 392.4739ms for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.850752    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:06.055802    8176 request.go:629] Waited for 204.8047ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:39:06.055802    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:39:06.055802    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:06.055802    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:06.055802    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:06.056556    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:06.056556    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:06.059538    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:06 GMT
	I0308 00:39:06.059538    8176 round_trippers.go:580]     Audit-Id: e1c9f332-034b-48d0-91f5-239a75f84518
	I0308 00:39:06.059538    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:06.059538    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:06.059538    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:06.059538    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:06.059903    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nt8td","generateName":"kube-proxy-","namespace":"kube-system","uid":"dafb9385-fe20-4849-bd58-31dcf82b4a58","resourceVersion":"1674","creationTimestamp":"2024-03-08T00:13:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0308 00:39:06.243796    8176 request.go:629] Waited for 183.0542ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:06.243899    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:06.243899    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:06.243899    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:06.243899    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:06.244634    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:06.247744    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:06.247744    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:06 GMT
	I0308 00:39:06.247744    8176 round_trippers.go:580]     Audit-Id: befb9d79-5a49-4569-ba8e-cc8b676dc19c
	I0308 00:39:06.247744    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:06.247744    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:06.247810    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:06.247810    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:06.247862    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:39:06.248542    8176 pod_ready.go:92] pod "kube-proxy-nt8td" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:06.248622    8176 pod_ready.go:81] duration metric: took 397.8662ms for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:06.248622    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:06.452531    8176 request.go:629] Waited for 203.6749ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:39:06.452735    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:39:06.452868    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:06.452868    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:06.452868    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:06.453157    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:06.455906    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:06.455906    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:06 GMT
	I0308 00:39:06.455906    8176 round_trippers.go:580]     Audit-Id: 315d9ecb-5318-47b0-99c7-edd9e310ec3a
	I0308 00:39:06.455906    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:06.455906    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:06.455906    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:06.455906    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:06.456159    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1744","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0308 00:39:06.653600    8176 request.go:629] Waited for 196.8943ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:06.653600    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:06.653600    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:06.653600    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:06.653600    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:06.654532    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:06.654532    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:06.654532    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:06.654532    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:06.657195    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:06 GMT
	I0308 00:39:06.657195    8176 round_trippers.go:580]     Audit-Id: d1704fb1-0342-4ade-85f4-57c7510d846d
	I0308 00:39:06.657195    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:06.657195    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:06.657372    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:39:06.657517    8176 pod_ready.go:92] pod "kube-scheduler-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:06.657517    8176 pod_ready.go:81] duration metric: took 408.8915ms for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:06.657517    8176 pod_ready.go:38] duration metric: took 1.6129286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:39:06.657517    8176 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 00:39:06.667961    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 00:39:06.690680    8176 system_svc.go:56] duration metric: took 33.1621ms WaitForService to wait for kubelet
	I0308 00:39:06.690680    8176 kubeadm.go:576] duration metric: took 4.899087s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 00:39:06.690680    8176 node_conditions.go:102] verifying NodePressure condition ...
	I0308 00:39:06.847861    8176 request.go:629] Waited for 156.9418ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes
	I0308 00:39:06.847861    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes
	I0308 00:39:06.848014    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:06.848014    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:06.848014    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:06.848350    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:06.848350    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:06.852343    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:06.852343    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:06.852343    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:06.852343    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:06 GMT
	I0308 00:39:06.852343    8176 round_trippers.go:580]     Audit-Id: c7b16980-97ec-44fa-b493-715e62ea0e49
	I0308 00:39:06.852343    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:06.853347    8176 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2095"},"items":[{"metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14849 chars]
	I0308 00:39:06.853631    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:39:06.854159    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:39:06.854159    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:39:06.854159    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:39:06.854159    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:39:06.854159    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:39:06.854159    8176 node_conditions.go:105] duration metric: took 163.478ms to run NodePressure ...
	I0308 00:39:06.854159    8176 start.go:240] waiting for startup goroutines ...
	I0308 00:39:06.854292    8176 start.go:254] writing updated cluster config ...
	I0308 00:39:06.866206    8176 ssh_runner.go:195] Run: rm -f paused
	I0308 00:39:06.994865    8176 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 00:39:07.001960    8176 out.go:177] * Done! kubectl is now configured to use "multinode-397400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.369497695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.369516495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.370214098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.374438817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.374570917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.374791918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.375162420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:34:40 multinode-397400 cri-dockerd[1249]: time="2024-03-08T00:34:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bd3961aae453d674fbd9879978f2edf781424559bee763553ecc0b5480320532/resolv.conf as [nameserver 172.20.48.1]"
	Mar 08 00:34:40 multinode-397400 cri-dockerd[1249]: time="2024-03-08T00:34:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d97a9e240282efa34aeaa8b7d8b28489a577c9159a13eed18fd34ff81cf6b847/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.835757400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.835865801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.835882501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.835975901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.912371346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.912564047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.912724548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.913086850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:35:03 multinode-397400 dockerd[1035]: time="2024-03-08T00:35:03.751092590Z" level=info msg="ignoring event" container=31baaa0408128be77387f40597623f6920d87dac0b5e733b0ef7022ae5df8c58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 08 00:35:03 multinode-397400 dockerd[1041]: time="2024-03-08T00:35:03.752938199Z" level=info msg="shim disconnected" id=31baaa0408128be77387f40597623f6920d87dac0b5e733b0ef7022ae5df8c58 namespace=moby
	Mar 08 00:35:03 multinode-397400 dockerd[1041]: time="2024-03-08T00:35:03.753088400Z" level=warning msg="cleaning up after shim disconnected" id=31baaa0408128be77387f40597623f6920d87dac0b5e733b0ef7022ae5df8c58 namespace=moby
	Mar 08 00:35:03 multinode-397400 dockerd[1041]: time="2024-03-08T00:35:03.753099500Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 08 00:35:19 multinode-397400 dockerd[1041]: time="2024-03-08T00:35:19.412792964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 00:35:19 multinode-397400 dockerd[1041]: time="2024-03-08T00:35:19.412855364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:35:19 multinode-397400 dockerd[1041]: time="2024-03-08T00:35:19.412867664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:35:19 multinode-397400 dockerd[1041]: time="2024-03-08T00:35:19.413080865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	45f94fda9ca26       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   d45a9b335323c       storage-provisioner
	0c3e8474c735a       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   d97a9e240282e       busybox-5b5d89c9d6-j7ck4
	58f69bbde10c9       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   1                   bd3961aae453d       coredns-5dd5756b68-w4hzh
	9dacbf05ab6e1       4950bb10b3f87                                                                                         4 minutes ago       Running             kindnet-cni               1                   a3a9d8e6a117e       kindnet-wkwtm
	31baaa0408128       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   d45a9b335323c       storage-provisioner
	e7bc69da51949       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                1                   f639fb3711ca7       kube-proxy-nt8td
	2bc9651e0b360       73deb9a3f7025                                                                                         4 minutes ago       Running             etcd                      0                   45c6fc79a1b4d       etcd-multinode-397400
	3947d85995668       e3db313c6dbc0                                                                                         4 minutes ago       Running             kube-scheduler            1                   6436a4df84b2c       kube-scheduler-multinode-397400
	ddd59e5b2501e       7fe0e6f37db33                                                                                         4 minutes ago       Running             kube-apiserver            0                   df28fa2acee46       kube-apiserver-multinode-397400
	df7b64a1988a8       d058aa5ab969c                                                                                         4 minutes ago       Running             kube-controller-manager   1                   b272848c66a23       kube-controller-manager-multinode-397400
	ce9a9bc4cfe37       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago      Exited              busybox                   0                   cdb14ba552809       busybox-5b5d89c9d6-j7ck4
	b8903699a2e38       ead0a4a53df89                                                                                         25 minutes ago      Exited              coredns                   0                   13e6ea5ce4bdc       coredns-5dd5756b68-w4hzh
	91ada1ebb521d       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago      Exited              kindnet-cni               0                   90ba9a9d99a3d       kindnet-wkwtm
	79433b5ca644a       83f6cc407eed8                                                                                         25 minutes ago      Exited              kube-proxy                0                   9c957cee5d35c       kube-proxy-nt8td
	0aaf57b801fb8       e3db313c6dbc0                                                                                         25 minutes ago      Exited              kube-scheduler            0                   d4b57713d4316       kube-scheduler-multinode-397400
	4f8851b134589       d058aa5ab969c                                                                                         25 minutes ago      Exited              kube-controller-manager   0                   ead2ed31c6b3d       kube-controller-manager-multinode-397400
	
	
	==> coredns [58f69bbde10c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b0d01e750f1333b12a0afb000b64bd021779da79ee4f8aee5ecad4705d75b53898cf9670ad125c407f1c536554c13092ed2cbd72906f6f0aabed3ba5d92a353f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44776 - 53642 "HINFO IN 4310211516712145791.863761266172721005. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.145987054s
	
	
	==> coredns [b8903699a2e3] <==
	[INFO] 10.244.0.3:34101 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000146601s
	[INFO] 10.244.0.3:39343 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125001s
	[INFO] 10.244.0.3:51579 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202401s
	[INFO] 10.244.0.3:34574 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000234402s
	[INFO] 10.244.0.3:41474 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161301s
	[INFO] 10.244.0.3:56490 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117701s
	[INFO] 10.244.0.3:47237 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125501s
	[INFO] 10.244.1.2:57949 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186801s
	[INFO] 10.244.1.2:51978 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082601s
	[INFO] 10.244.1.2:53464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123401s
	[INFO] 10.244.1.2:60851 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124401s
	[INFO] 10.244.0.3:47849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000966s
	[INFO] 10.244.0.3:33374 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000329903s
	[INFO] 10.244.0.3:33498 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231301s
	[INFO] 10.244.0.3:49302 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158701s
	[INFO] 10.244.1.2:57262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157901s
	[INFO] 10.244.1.2:56667 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000185301s
	[INFO] 10.244.1.2:47521 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000193002s
	[INFO] 10.244.1.2:51329 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000258401s
	[INFO] 10.244.0.3:49110 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166601s
	[INFO] 10.244.0.3:55134 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128401s
	[INFO] 10.244.0.3:43988 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000051301s
	[INFO] 10.244.0.3:49870 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000082101s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-397400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-397400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=multinode-397400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T00_13_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 00:13:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-397400
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 00:39:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 00:34:36 +0000   Fri, 08 Mar 2024 00:13:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 00:34:36 +0000   Fri, 08 Mar 2024 00:13:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 00:34:36 +0000   Fri, 08 Mar 2024 00:13:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 00:34:36 +0000   Fri, 08 Mar 2024 00:34:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.61.151
	  Hostname:    multinode-397400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 f58bfd6541cf46d6b45a73ca4f8c85e6
	  System UUID:                8391dbcb-b4b7-5845-b9ff-a5eba8cddcb5
	  Boot ID:                    9b542d52-a0e2-458a-8d24-b3ad596c9f52
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-j7ck4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-5dd5756b68-w4hzh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	  kube-system                 etcd-multinode-397400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m53s
	  kube-system                 kindnet-wkwtm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-apiserver-multinode-397400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-multinode-397400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-nt8td                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-multinode-397400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 25m                    kube-proxy       
	  Normal  Starting                 4m51s                  kube-proxy       
	  Normal  Starting                 25m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  25m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25m (x8 over 25m)      kubelet          Node multinode-397400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m (x8 over 25m)      kubelet          Node multinode-397400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m (x7 over 25m)      kubelet          Node multinode-397400 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    25m                    kubelet          Node multinode-397400 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  25m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25m                    kubelet          Node multinode-397400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     25m                    kubelet          Node multinode-397400 status is now: NodeHasSufficientPID
	  Normal  Starting                 25m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           25m                    node-controller  Node multinode-397400 event: Registered Node multinode-397400 in Controller
	  Normal  NodeReady                25m                    kubelet          Node multinode-397400 status is now: NodeReady
	  Normal  Starting                 4m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m59s (x8 over 4m59s)  kubelet          Node multinode-397400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s (x8 over 4m59s)  kubelet          Node multinode-397400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s (x7 over 4m59s)  kubelet          Node multinode-397400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m41s                  node-controller  Node multinode-397400 event: Registered Node multinode-397400 in Controller
	
	
	Name:               multinode-397400-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-397400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=multinode-397400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T00_36_52_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 00:36:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-397400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 00:39:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 00:36:57 +0000   Fri, 08 Mar 2024 00:36:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 00:36:57 +0000   Fri, 08 Mar 2024 00:36:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 00:36:57 +0000   Fri, 08 Mar 2024 00:36:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 00:36:57 +0000   Fri, 08 Mar 2024 00:36:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.50.67
	  Hostname:    multinode-397400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 28dd8cb4d1cf408a8d14fae89f734da5
	  System UUID:                12e9ba38-a8d8-e14f-9556-c9cd17fe7785
	  Boot ID:                    23f89f6e-fbed-4b79-bf6a-26ee3d3f8c37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-84btt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 kindnet-jvzwq               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-proxy-gw9w9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 22m                    kube-proxy       
	  Normal  Starting                 2m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  22m (x5 over 22m)      kubelet          Node multinode-397400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x5 over 22m)      kubelet          Node multinode-397400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x5 over 22m)      kubelet          Node multinode-397400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                    kubelet          Node multinode-397400-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m33s (x5 over 2m35s)  kubelet          Node multinode-397400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m33s (x5 over 2m35s)  kubelet          Node multinode-397400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m33s (x5 over 2m35s)  kubelet          Node multinode-397400-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m31s                  node-controller  Node multinode-397400-m02 event: Registered Node multinode-397400-m02 in Controller
	  Normal  NodeReady                2m28s                  kubelet          Node multinode-397400-m02 status is now: NodeReady
	
	
	Name:               multinode-397400-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-397400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=multinode-397400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T00_39_01_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 00:39:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-397400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 00:39:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 00:39:04 +0000   Fri, 08 Mar 2024 00:39:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 00:39:04 +0000   Fri, 08 Mar 2024 00:39:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 00:39:04 +0000   Fri, 08 Mar 2024 00:39:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 00:39:04 +0000   Fri, 08 Mar 2024 00:39:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.53.127
	  Hostname:    multinode-397400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 2676a5dfdd344c58a2ba947aec0ef044
	  System UUID:                3329c691-5f85-c647-9864-5ba23a70649d
	  Boot ID:                    daa84229-ec22-4c04-ba08-1632fb430db4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-srl7h       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-proxy-ktnrd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m53s                  kube-proxy       
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  Starting                 22s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x5 over 18m)      kubelet          Node multinode-397400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x5 over 18m)      kubelet          Node multinode-397400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x5 over 18m)      kubelet          Node multinode-397400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                    kubelet          Node multinode-397400-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    8m56s (x2 over 8m56s)  kubelet          Node multinode-397400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m56s (x2 over 8m56s)  kubelet          Node multinode-397400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m56s (x2 over 8m56s)  kubelet          Node multinode-397400-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m56s                  kubelet          Starting kubelet.
	  Normal  NodeReady                8m50s                  kubelet          Node multinode-397400-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  24s (x5 over 26s)      kubelet          Node multinode-397400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x5 over 26s)      kubelet          Node multinode-397400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x5 over 26s)      kubelet          Node multinode-397400-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21s                    node-controller  Node multinode-397400-m03 event: Registered Node multinode-397400-m03 in Controller
	  Normal  NodeReady                21s                    kubelet          Node multinode-397400-m03 status is now: NodeReady
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[Mar 8 00:33] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.234588] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +0.913526] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +6.040163] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +44.253272] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.137491] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Mar 8 00:34] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[  +0.089400] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.473836] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.146970] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +0.171341] systemd-fstab-generator[1028]: Ignoring "noauto" option for root device
	[  +1.880514] systemd-fstab-generator[1201]: Ignoring "noauto" option for root device
	[  +0.157597] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.158418] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.229010] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	[  +0.767976] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +3.619826] systemd-fstab-generator[1504]: Ignoring "noauto" option for root device
	[  +0.087527] kauditd_printk_skb: 227 callbacks suppressed
	[  +7.009284] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.570286] systemd-fstab-generator[3205]: Ignoring "noauto" option for root device
	[  +0.132700] kauditd_printk_skb: 48 callbacks suppressed
	[Mar 8 00:35] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [2bc9651e0b36] <==
	{"level":"info","ts":"2024-03-08T00:34:28.177531Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"701237fd4f62c309","initial-advertise-peer-urls":["https://172.20.61.151:2380"],"listen-peer-urls":["https://172.20.61.151:2380"],"advertise-client-urls":["https://172.20.61.151:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.61.151:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T00:34:28.177621Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T00:34:28.247261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"701237fd4f62c309 switched to configuration voters=(8075578642926846729)"}
	{"level":"info","ts":"2024-03-08T00:34:28.24743Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1e4eb1942c73643","local-member-id":"701237fd4f62c309","added-peer-id":"701237fd4f62c309","added-peer-peer-urls":["https://172.20.48.212:2380"]}
	{"level":"info","ts":"2024-03-08T00:34:28.248025Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e4eb1942c73643","local-member-id":"701237fd4f62c309","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T00:34:28.24806Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T00:34:28.24817Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.20.61.151:2380"}
	{"level":"info","ts":"2024-03-08T00:34:28.2482Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.20.61.151:2380"}
	{"level":"info","ts":"2024-03-08T00:34:28.251528Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T00:34:28.251814Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T00:34:28.252031Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T00:34:29.921158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"701237fd4f62c309 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-08T00:34:29.921274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"701237fd4f62c309 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-08T00:34:29.921398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"701237fd4f62c309 received MsgPreVoteResp from 701237fd4f62c309 at term 2"}
	{"level":"info","ts":"2024-03-08T00:34:29.921577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"701237fd4f62c309 became candidate at term 3"}
	{"level":"info","ts":"2024-03-08T00:34:29.921603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"701237fd4f62c309 received MsgVoteResp from 701237fd4f62c309 at term 3"}
	{"level":"info","ts":"2024-03-08T00:34:29.921614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"701237fd4f62c309 became leader at term 3"}
	{"level":"info","ts":"2024-03-08T00:34:29.921623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 701237fd4f62c309 elected leader 701237fd4f62c309 at term 3"}
	{"level":"info","ts":"2024-03-08T00:34:29.926172Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"701237fd4f62c309","local-member-attributes":"{Name:multinode-397400 ClientURLs:[https://172.20.61.151:2379]}","request-path":"/0/members/701237fd4f62c309/attributes","cluster-id":"1e4eb1942c73643","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T00:34:29.926197Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T00:34:29.926519Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T00:34:29.928045Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T00:34:29.927597Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.61.151:2379"}
	{"level":"info","ts":"2024-03-08T00:34:29.928924Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T00:34:29.929148Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:39:25 up 6 min,  0 users,  load average: 0.54, 0.56, 0.29
	Linux multinode-397400 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [91ada1ebb521] <==
	I0308 00:31:32.130125       1 main.go:250] Node multinode-397400-m03 has CIDR [10.244.3.0/24] 
	I0308 00:31:42.144211       1 main.go:223] Handling node with IPs: map[172.20.48.212:{}]
	I0308 00:31:42.144319       1 main.go:227] handling current node
	I0308 00:31:42.144332       1 main.go:223] Handling node with IPs: map[172.20.61.226:{}]
	I0308 00:31:42.144342       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:31:42.144702       1 main.go:223] Handling node with IPs: map[172.20.52.190:{}]
	I0308 00:31:42.144780       1 main.go:250] Node multinode-397400-m03 has CIDR [10.244.3.0/24] 
	I0308 00:31:52.150046       1 main.go:223] Handling node with IPs: map[172.20.48.212:{}]
	I0308 00:31:52.150087       1 main.go:227] handling current node
	I0308 00:31:52.150099       1 main.go:223] Handling node with IPs: map[172.20.61.226:{}]
	I0308 00:31:52.150107       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:31:52.150747       1 main.go:223] Handling node with IPs: map[172.20.52.190:{}]
	I0308 00:31:52.150953       1 main.go:250] Node multinode-397400-m03 has CIDR [10.244.3.0/24] 
	I0308 00:32:02.471314       1 main.go:223] Handling node with IPs: map[172.20.48.212:{}]
	I0308 00:32:02.471359       1 main.go:227] handling current node
	I0308 00:32:02.471430       1 main.go:223] Handling node with IPs: map[172.20.61.226:{}]
	I0308 00:32:02.471457       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:32:02.471613       1 main.go:223] Handling node with IPs: map[172.20.52.190:{}]
	I0308 00:32:02.471646       1 main.go:250] Node multinode-397400-m03 has CIDR [10.244.3.0/24] 
	I0308 00:32:12.479491       1 main.go:223] Handling node with IPs: map[172.20.48.212:{}]
	I0308 00:32:12.480248       1 main.go:227] handling current node
	I0308 00:32:12.480323       1 main.go:223] Handling node with IPs: map[172.20.61.226:{}]
	I0308 00:32:12.480354       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:32:12.480646       1 main.go:223] Handling node with IPs: map[172.20.52.190:{}]
	I0308 00:32:12.480864       1 main.go:250] Node multinode-397400-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [9dacbf05ab6e] <==
	I0308 00:38:44.777434       1 main.go:223] Handling node with IPs: map[172.20.52.190:{}]
	I0308 00:38:44.777497       1 main.go:250] Node multinode-397400-m03 has CIDR [10.244.3.0/24] 
	I0308 00:38:54.787797       1 main.go:223] Handling node with IPs: map[172.20.61.151:{}]
	I0308 00:38:54.787932       1 main.go:227] handling current node
	I0308 00:38:54.787947       1 main.go:223] Handling node with IPs: map[172.20.50.67:{}]
	I0308 00:38:54.787955       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:39:04.798964       1 main.go:223] Handling node with IPs: map[172.20.61.151:{}]
	I0308 00:39:04.799066       1 main.go:227] handling current node
	I0308 00:39:04.799079       1 main.go:223] Handling node with IPs: map[172.20.50.67:{}]
	I0308 00:39:04.799089       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:39:04.799571       1 main.go:223] Handling node with IPs: map[172.20.53.127:{}]
	I0308 00:39:04.799605       1 main.go:250] Node multinode-397400-m03 has CIDR [10.244.2.0/24] 
	I0308 00:39:04.799668       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.20.53.127 Flags: [] Table: 0} 
	I0308 00:39:14.805476       1 main.go:223] Handling node with IPs: map[172.20.61.151:{}]
	I0308 00:39:14.805895       1 main.go:227] handling current node
	I0308 00:39:14.806003       1 main.go:223] Handling node with IPs: map[172.20.50.67:{}]
	I0308 00:39:14.806015       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:39:14.806156       1 main.go:223] Handling node with IPs: map[172.20.53.127:{}]
	I0308 00:39:14.806344       1 main.go:250] Node multinode-397400-m03 has CIDR [10.244.2.0/24] 
	I0308 00:39:24.813980       1 main.go:223] Handling node with IPs: map[172.20.61.151:{}]
	I0308 00:39:24.814131       1 main.go:227] handling current node
	I0308 00:39:24.814163       1 main.go:223] Handling node with IPs: map[172.20.50.67:{}]
	I0308 00:39:24.814171       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:39:24.814885       1 main.go:223] Handling node with IPs: map[172.20.53.127:{}]
	I0308 00:39:24.814999       1 main.go:250] Node multinode-397400-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [ddd59e5b2501] <==
	I0308 00:34:31.379349       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0308 00:34:31.380093       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0308 00:34:31.380256       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0308 00:34:31.419934       1 shared_informer.go:318] Caches are synced for configmaps
	I0308 00:34:31.421611       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0308 00:34:31.422873       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0308 00:34:31.425124       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0308 00:34:31.425221       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0308 00:34:31.425322       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0308 00:34:31.425509       1 aggregator.go:166] initial CRD sync complete...
	I0308 00:34:31.425578       1 autoregister_controller.go:141] Starting autoregister controller
	I0308 00:34:31.425586       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0308 00:34:31.425592       1 cache.go:39] Caches are synced for autoregister controller
	I0308 00:34:31.426446       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 00:34:31.468358       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0308 00:34:31.487371       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 00:34:32.336480       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0308 00:34:32.871557       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.20.61.151]
	I0308 00:34:32.872892       1 controller.go:624] quota admission added evaluator for: endpoints
	I0308 00:34:32.885117       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0308 00:34:34.720003       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0308 00:34:34.896027       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0308 00:34:34.909366       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0308 00:34:35.017904       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0308 00:34:35.038760       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [4f8851b13458] <==
	I0308 00:17:20.176000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.976786ms"
	I0308 00:17:20.176273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.1µs"
	I0308 00:20:50.158570       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:20:50.159696       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-397400-m03\" does not exist"
	I0308 00:20:50.183629       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ktnrd"
	I0308 00:20:50.183663       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-srl7h"
	I0308 00:20:50.194174       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-397400-m03" podCIDRs=["10.244.2.0/24"]
	I0308 00:20:51.432910       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-397400-m03"
	I0308 00:20:51.432983       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-397400-m03 event: Registered Node multinode-397400-m03 in Controller"
	I0308 00:21:07.481594       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:28:11.562720       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:28:11.563273       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-397400-m03 status is now: NodeNotReady"
	I0308 00:28:11.585531       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-ktnrd" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 00:28:11.603986       1 event.go:307] "Event occurred" object="kube-system/kindnet-srl7h" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 00:30:24.270272       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:30:26.631888       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-397400-m03 event: Removing Node multinode-397400-m03 from Controller"
	I0308 00:30:29.668520       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:30:29.669558       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-397400-m03\" does not exist"
	I0308 00:30:29.679555       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-397400-m03" podCIDRs=["10.244.3.0/24"]
	I0308 00:30:31.632782       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-397400-m03 event: Registered Node multinode-397400-m03 in Controller"
	I0308 00:30:35.024823       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:32:01.715054       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:32:01.716052       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-397400-m03 status is now: NodeNotReady"
	I0308 00:32:02.082918       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-ktnrd" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 00:32:02.470368       1 event.go:307] "Event occurred" object="kube-system/kindnet-srl7h" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-controller-manager [df7b64a1988a] <==
	I0308 00:36:37.444457       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="7.560335ms"
	I0308 00:36:37.444752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="34.7µs"
	I0308 00:36:49.137536       1 event.go:307] "Event occurred" object="multinode-397400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-397400-m02 event: Removing Node multinode-397400-m02 from Controller"
	I0308 00:36:52.268067       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-397400-m02\" does not exist"
	I0308 00:36:52.272363       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-ctt42" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-ctt42"
	I0308 00:36:52.284091       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-397400-m02" podCIDRs=["10.244.1.0/24"]
	I0308 00:36:52.705806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="71.4µs"
	I0308 00:36:54.138570       1 event.go:307] "Event occurred" object="multinode-397400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-397400-m02 event: Registered Node multinode-397400-m02 in Controller"
	I0308 00:36:57.941544       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:36:57.973235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.301µs"
	I0308 00:36:59.162872       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-ctt42" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-ctt42"
	I0308 00:37:04.792011       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="61.9µs"
	I0308 00:37:04.804939       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="176.801µs"
	I0308 00:37:04.825775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="59.8µs"
	I0308 00:37:04.927524       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="106.401µs"
	I0308 00:37:04.936931       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="44µs"
	I0308 00:37:05.963144       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.190865ms"
	I0308 00:37:05.963667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="29.2µs"
	I0308 00:38:54.049062       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:38:54.186832       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-397400-m03 event: Removing Node multinode-397400-m03 from Controller"
	I0308 00:39:01.188836       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:39:01.189397       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-397400-m03\" does not exist"
	I0308 00:39:01.209039       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-397400-m03" podCIDRs=["10.244.2.0/24"]
	I0308 00:39:04.188687       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-397400-m03 event: Registered Node multinode-397400-m03 in Controller"
	I0308 00:39:04.587445       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	
	
	==> kube-proxy [79433b5ca644] <==
	I0308 00:13:54.006048       1 server_others.go:69] "Using iptables proxy"
	I0308 00:13:54.040499       1 node.go:141] Successfully retrieved node IP: 172.20.48.212
	I0308 00:13:54.095908       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 00:13:54.096005       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 00:13:54.101982       1 server_others.go:152] "Using iptables Proxier"
	I0308 00:13:54.102091       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 00:13:54.102846       1 server.go:846] "Version info" version="v1.28.4"
	I0308 00:13:54.102861       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 00:13:54.104235       1 config.go:315] "Starting node config controller"
	I0308 00:13:54.104569       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 00:13:54.105241       1 config.go:97] "Starting endpoint slice config controller"
	I0308 00:13:54.106017       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 00:13:54.106286       1 config.go:188] "Starting service config controller"
	I0308 00:13:54.106444       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 00:13:54.205614       1 shared_informer.go:318] Caches are synced for node config
	I0308 00:13:54.206939       1 shared_informer.go:318] Caches are synced for service config
	I0308 00:13:54.206988       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e7bc69da5194] <==
	I0308 00:34:33.859531       1 server_others.go:69] "Using iptables proxy"
	I0308 00:34:33.939662       1 node.go:141] Successfully retrieved node IP: 172.20.61.151
	I0308 00:34:34.048460       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 00:34:34.048502       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 00:34:34.058077       1 server_others.go:152] "Using iptables Proxier"
	I0308 00:34:34.059355       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 00:34:34.060795       1 server.go:846] "Version info" version="v1.28.4"
	I0308 00:34:34.060831       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 00:34:34.068894       1 config.go:188] "Starting service config controller"
	I0308 00:34:34.070316       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 00:34:34.070384       1 config.go:97] "Starting endpoint slice config controller"
	I0308 00:34:34.070519       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 00:34:34.074000       1 config.go:315] "Starting node config controller"
	I0308 00:34:34.074036       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 00:34:34.171337       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0308 00:34:34.171644       1 shared_informer.go:318] Caches are synced for service config
	I0308 00:34:34.174768       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0aaf57b801fb] <==
	E0308 00:13:36.477702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 00:13:36.525082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0308 00:13:36.525124       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 00:13:36.600953       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 00:13:36.601042       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 00:13:36.636085       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0308 00:13:36.636109       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 00:13:36.684531       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 00:13:36.684579       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 00:13:36.716028       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 00:13:36.716307       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 00:13:36.848521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 00:13:36.848602       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 00:13:36.900721       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 00:13:36.900908       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 00:13:36.942519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 00:13:36.942753       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0308 00:13:36.951164       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 00:13:36.951329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 00:13:36.977745       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0308 00:13:36.977888       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0308 00:13:39.884202       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 00:32:17.869313       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0308 00:32:17.869458       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0308 00:32:17.869692       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [3947d8599566] <==
	I0308 00:34:29.069311       1 serving.go:348] Generated self-signed cert in-memory
	W0308 00:34:31.393552       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0308 00:34:31.393586       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 00:34:31.393596       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0308 00:34:31.393602       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0308 00:34:31.421426       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0308 00:34:31.421446       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 00:34:31.424864       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0308 00:34:31.425239       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0308 00:34:31.426003       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 00:34:31.427938       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 00:34:31.526392       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 00:35:04 multinode-397400 kubelet[1511]: I0308 00:35:04.027192    1511 scope.go:117] "RemoveContainer" containerID="31baaa0408128be77387f40597623f6920d87dac0b5e733b0ef7022ae5df8c58"
	Mar 08 00:35:04 multinode-397400 kubelet[1511]: E0308 00:35:04.027535    1511 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(81b55677-743c-4d2f-b04f-95928d4a3868)\"" pod="kube-system/storage-provisioner" podUID="81b55677-743c-4d2f-b04f-95928d4a3868"
	Mar 08 00:35:19 multinode-397400 kubelet[1511]: I0308 00:35:19.236438    1511 scope.go:117] "RemoveContainer" containerID="31baaa0408128be77387f40597623f6920d87dac0b5e733b0ef7022ae5df8c58"
	Mar 08 00:35:26 multinode-397400 kubelet[1511]: I0308 00:35:26.234646    1511 scope.go:117] "RemoveContainer" containerID="23ccdb1fc3b5363ba68cec77b78cd8cfbf75f44d8e6690b8fc1733389471c6d2"
	Mar 08 00:35:26 multinode-397400 kubelet[1511]: I0308 00:35:26.276113    1511 scope.go:117] "RemoveContainer" containerID="c0241fd304ad68df4d3ec3efdb5d8ec4a0b37b635afae4f92383607ff98d6fa4"
	Mar 08 00:35:26 multinode-397400 kubelet[1511]: E0308 00:35:26.277646    1511 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 00:35:26 multinode-397400 kubelet[1511]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 00:35:26 multinode-397400 kubelet[1511]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 00:35:26 multinode-397400 kubelet[1511]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 00:35:26 multinode-397400 kubelet[1511]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 00:36:26 multinode-397400 kubelet[1511]: E0308 00:36:26.277114    1511 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 00:36:26 multinode-397400 kubelet[1511]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 00:36:26 multinode-397400 kubelet[1511]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 00:36:26 multinode-397400 kubelet[1511]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 00:36:26 multinode-397400 kubelet[1511]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 00:37:26 multinode-397400 kubelet[1511]: E0308 00:37:26.279458    1511 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 00:37:26 multinode-397400 kubelet[1511]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 00:37:26 multinode-397400 kubelet[1511]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 00:37:26 multinode-397400 kubelet[1511]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 00:37:26 multinode-397400 kubelet[1511]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 00:38:26 multinode-397400 kubelet[1511]: E0308 00:38:26.279626    1511 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 00:38:26 multinode-397400 kubelet[1511]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 00:38:26 multinode-397400 kubelet[1511]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 00:38:26 multinode-397400 kubelet[1511]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 00:38:26 multinode-397400 kubelet[1511]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 00:39:18.146543    5576 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-397400 -n multinode-397400
E0308 00:39:37.390883    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-397400 -n multinode-397400: (10.7233728s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-397400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (510.35s)
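Note on the repeated stderr warning captured above: each minikube invocation in this run logs "Unable to resolve the current Docker CLI context \"default\"" because C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json is missing on the agent. The 64-character directory name appears to be the SHA-256 digest of the context name "default", which is how the Docker CLI keys its on-disk context store; a minimal Go check of that assumption:

	// Illustrative check only, not part of the test suite: verify that the hex
	// directory name in the warning matches sha256("default").
	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	func main() {
		sum := sha256.Sum256([]byte("default"))
		// Expected, if the assumption holds:
		// 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
		fmt.Printf("%x\n", sum)
	}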

                                                
                                    
TestMultiNode/serial/StopMultiNode (39.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-397400 stop: exit status 1 (7.8986297s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-397400-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 00:40:42.061683    9032 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-windows-amd64.exe -p multinode-397400 stop": exit status 1
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-397400 status: context deadline exceeded (0s)
multinode_test.go:354: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-397400 status" : context deadline exceeded
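For context on the "context deadline exceeded (0s)" above: the stop command had already used up the allotted time, so the follow-up status check fails immediately without ever running the binary. A minimal sketch of that pattern (the timeout value and error handling here are assumptions, not the actual multinode_test.go helpers):

	// Sketch: run `minikube stop` and `minikube status` under one shared
	// deadline; once the deadline is spent, the second command returns
	// "context deadline exceeded" right away, matching the log above.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func run(ctx context.Context, args ...string) error {
		cmd := exec.CommandContext(ctx, "out/minikube-windows-amd64.exe", args...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\n", out)
		return err
	}

	func main() {
		// Shared budget for the stop-then-status sequence (value assumed).
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		if err := run(ctx, "-p", "multinode-397400", "stop"); err != nil {
			fmt.Println("stop failed:", err)
		}
		if err := run(ctx, "-p", "multinode-397400", "status"); err != nil {
			fmt.Println("status failed:", err) // e.g. context deadline exceeded
		}
	}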
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-397400 -n multinode-397400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-397400 -n multinode-397400: (10.6274056s)
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 logs -n 25: (7.8932729s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:24 UTC | 08 Mar 24 00:24 UTC |
	|         | multinode-397400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-397400 cp multinode-397400-m02:/home/docker/cp-test.txt                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:24 UTC | 08 Mar 24 00:24 UTC |
	|         | multinode-397400:/home/docker/cp-test_multinode-397400-m02_multinode-397400.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:24 UTC | 08 Mar 24 00:25 UTC |
	|         | multinode-397400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n multinode-397400 sudo cat                                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:25 UTC | 08 Mar 24 00:25 UTC |
	|         | /home/docker/cp-test_multinode-397400-m02_multinode-397400.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-397400 cp multinode-397400-m02:/home/docker/cp-test.txt                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:25 UTC | 08 Mar 24 00:25 UTC |
	|         | multinode-397400-m03:/home/docker/cp-test_multinode-397400-m02_multinode-397400-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:25 UTC | 08 Mar 24 00:25 UTC |
	|         | multinode-397400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n multinode-397400-m03 sudo cat                                                                    | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:25 UTC | 08 Mar 24 00:25 UTC |
	|         | /home/docker/cp-test_multinode-397400-m02_multinode-397400-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-397400 cp testdata\cp-test.txt                                                                                 | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:25 UTC | 08 Mar 24 00:25 UTC |
	|         | multinode-397400-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:25 UTC | 08 Mar 24 00:26 UTC |
	|         | multinode-397400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-397400 cp multinode-397400-m03:/home/docker/cp-test.txt                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:26 UTC | 08 Mar 24 00:26 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1220590344\001\cp-test_multinode-397400-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:26 UTC | 08 Mar 24 00:26 UTC |
	|         | multinode-397400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-397400 cp multinode-397400-m03:/home/docker/cp-test.txt                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:26 UTC | 08 Mar 24 00:26 UTC |
	|         | multinode-397400:/home/docker/cp-test_multinode-397400-m03_multinode-397400.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:26 UTC | 08 Mar 24 00:26 UTC |
	|         | multinode-397400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n multinode-397400 sudo cat                                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:26 UTC | 08 Mar 24 00:26 UTC |
	|         | /home/docker/cp-test_multinode-397400-m03_multinode-397400.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-397400 cp multinode-397400-m03:/home/docker/cp-test.txt                                                        | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:26 UTC | 08 Mar 24 00:27 UTC |
	|         | multinode-397400-m02:/home/docker/cp-test_multinode-397400-m03_multinode-397400-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n                                                                                                  | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:27 UTC | 08 Mar 24 00:27 UTC |
	|         | multinode-397400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-397400 ssh -n multinode-397400-m02 sudo cat                                                                    | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:27 UTC | 08 Mar 24 00:27 UTC |
	|         | /home/docker/cp-test_multinode-397400-m03_multinode-397400-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-397400 node stop m03                                                                                           | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:27 UTC | 08 Mar 24 00:27 UTC |
	| node    | multinode-397400 node start                                                                                              | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:28 UTC | 08 Mar 24 00:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-397400                                                                                                 | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:31 UTC |                     |
	| stop    | -p multinode-397400                                                                                                      | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:31 UTC | 08 Mar 24 00:32 UTC |
	| start   | -p multinode-397400                                                                                                      | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:32 UTC | 08 Mar 24 00:39 UTC |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	| node    | list -p multinode-397400                                                                                                 | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:39 UTC |                     |
	| node    | multinode-397400 node delete                                                                                             | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:39 UTC | 08 Mar 24 00:40 UTC |
	|         | m03                                                                                                                      |                  |                   |         |                     |                     |
	| stop    | multinode-397400 stop                                                                                                    | multinode-397400 | minikube7\jenkins | v1.32.0 | 08 Mar 24 00:40 UTC |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 00:32:37
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 00:32:37.922575    8176 out.go:291] Setting OutFile to fd 856 ...
	I0308 00:32:37.923670    8176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 00:32:37.923670    8176 out.go:304] Setting ErrFile to fd 864...
	I0308 00:32:37.923670    8176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 00:32:37.940351    8176 out.go:298] Setting JSON to false
	I0308 00:32:37.948587    8176 start.go:129] hostinfo: {"hostname":"minikube7","uptime":16912,"bootTime":1709841045,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0308 00:32:37.948587    8176 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0308 00:32:37.977819    8176 out.go:177] * [multinode-397400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0308 00:32:38.038558    8176 notify.go:220] Checking for updates...
	I0308 00:32:38.155085    8176 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:32:38.293202    8176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 00:32:38.361537    8176 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0308 00:32:38.493209    8176 out.go:177]   - MINIKUBE_LOCATION=16214
	I0308 00:32:38.646925    8176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 00:32:38.713059    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:32:38.713153    8176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 00:32:43.604897    8176 out.go:177] * Using the hyperv driver based on existing profile
	I0308 00:32:43.656589    8176 start.go:297] selected driver: hyperv
	I0308 00:32:43.656589    8176 start.go:901] validating driver "hyperv" against &{Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.48.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.61.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.52.190 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 00:32:43.656589    8176 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 00:32:43.705141    8176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 00:32:43.705141    8176 cni.go:84] Creating CNI manager for ""
	I0308 00:32:43.705141    8176 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0308 00:32:43.705141    8176 start.go:340] cluster config:
	{Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.48.212 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.61.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.52.190 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 00:32:43.705863    8176 iso.go:125] acquiring lock: {Name:mk41e0d38e058de906ab8df117c3158b3dc0e5b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 00:32:43.800164    8176 out.go:177] * Starting "multinode-397400" primary control-plane node in "multinode-397400" cluster
	I0308 00:32:43.934219    8176 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 00:32:43.943525    8176 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0308 00:32:43.943630    8176 cache.go:56] Caching tarball of preloaded images
	I0308 00:32:43.943783    8176 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0308 00:32:43.943783    8176 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0308 00:32:43.944375    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:32:43.947511    8176 start.go:360] acquireMachinesLock for multinode-397400: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 00:32:43.947511    8176 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-397400"
	I0308 00:32:43.948034    8176 start.go:96] Skipping create...Using existing machine configuration
	I0308 00:32:43.948034    8176 fix.go:54] fixHost starting: 
	I0308 00:32:43.948548    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:32:46.285957    8176 main.go:141] libmachine: [stdout =====>] : Off
	
	I0308 00:32:46.295664    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:32:46.295664    8176 fix.go:112] recreateIfNeeded on multinode-397400: state=Stopped err=<nil>
	W0308 00:32:46.295834    8176 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 00:32:46.387487    8176 out.go:177] * Restarting existing hyperv VM for "multinode-397400" ...
	I0308 00:32:46.550249    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-397400
	I0308 00:32:50.753141    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:32:50.756220    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:32:50.756220    8176 main.go:141] libmachine: Waiting for host to start...
	I0308 00:32:50.756280    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:32:52.695071    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:32:52.695071    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:32:52.695071    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:32:54.887593    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:32:54.887593    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:32:55.900196    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:32:57.841836    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:32:57.844621    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:32:57.844621    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:00.050483    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:33:00.050483    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:01.057910    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:03.042148    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:03.042148    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:03.048515    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:05.301315    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:33:05.301315    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:06.312529    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:08.216151    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:08.216771    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:08.216836    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:10.457362    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:33:10.457663    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:11.461065    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:13.333521    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:13.344076    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:13.344076    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:15.483825    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:15.493732    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:15.496624    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:17.278328    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:17.278328    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:17.288440    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:19.427467    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:19.427467    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:19.437446    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:33:19.439554    8176 machine.go:94] provisionDockerMachine start ...
	I0308 00:33:19.439554    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:21.228418    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:21.228471    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:21.228471    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:23.355186    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:23.355305    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:23.362080    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:33:23.362716    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:33:23.362716    8176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 00:33:23.491398    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 00:33:23.491398    8176 buildroot.go:166] provisioning hostname "multinode-397400"
	I0308 00:33:23.491398    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:25.290309    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:25.300470    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:25.300470    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:27.432491    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:27.432491    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:27.437646    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:33:27.438255    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:33:27.438255    8176 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-397400 && echo "multinode-397400" | sudo tee /etc/hostname
	I0308 00:33:27.588273    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-397400
	
	I0308 00:33:27.588273    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:29.399818    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:29.400611    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:29.400689    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:31.530375    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:31.541143    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:31.545761    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:33:31.546324    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:33:31.546324    8176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-397400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-397400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-397400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 00:33:31.690358    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 00:33:31.690415    8176 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 00:33:31.690415    8176 buildroot.go:174] setting up certificates
	I0308 00:33:31.690415    8176 provision.go:84] configureAuth start
	I0308 00:33:31.690415    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:33.423927    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:33.433716    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:33.433811    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:35.566216    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:35.576436    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:35.576561    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:37.349043    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:37.349196    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:37.349196    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:39.455536    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:39.455621    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:39.455621    8176 provision.go:143] copyHostCerts
	I0308 00:33:39.455621    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0308 00:33:39.455621    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 00:33:39.455621    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 00:33:39.456220    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 00:33:39.457731    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0308 00:33:39.457731    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 00:33:39.457731    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 00:33:39.458440    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 00:33:39.459142    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0308 00:33:39.459142    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 00:33:39.459664    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 00:33:39.459791    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 00:33:39.460597    8176 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-397400 san=[127.0.0.1 172.20.61.151 localhost minikube multinode-397400]
	I0308 00:33:39.570202    8176 provision.go:177] copyRemoteCerts
	I0308 00:33:39.581233    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 00:33:39.581233    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:41.418642    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:41.429092    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:41.429144    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:43.543732    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:43.543820    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:43.543877    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:33:43.646957    8176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.0655666s)
	I0308 00:33:43.646957    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0308 00:33:43.647433    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0308 00:33:43.683271    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0308 00:33:43.683271    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 00:33:43.709023    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0308 00:33:43.721242    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 00:33:43.755935    8176 provision.go:87] duration metric: took 12.0654072s to configureAuth
	I0308 00:33:43.756031    8176 buildroot.go:189] setting minikube options for container-runtime
	I0308 00:33:43.756111    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:33:43.756779    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:45.511498    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:45.511498    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:45.521394    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:47.611109    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:47.611109    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:47.626815    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:33:47.626815    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:33:47.626815    8176 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 00:33:47.772625    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 00:33:47.772625    8176 buildroot.go:70] root file system type: tmpfs
	I0308 00:33:47.772625    8176 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 00:33:47.772625    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:49.557878    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:49.558061    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:49.558166    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:51.715190    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:51.715190    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:51.720196    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:33:51.720500    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:33:51.720500    8176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 00:33:51.870543    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 00:33:51.870613    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:53.616895    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:53.616895    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:53.626040    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:33:55.743110    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:33:55.743110    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:55.758414    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:33:55.758414    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:33:55.758414    8176 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 00:33:57.122646    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0308 00:33:57.122731    8176 machine.go:97] duration metric: took 37.6828228s to provisionDockerMachine
	I0308 00:33:57.122731    8176 start.go:293] postStartSetup for "multinode-397400" (driver="hyperv")
	I0308 00:33:57.122790    8176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 00:33:57.134981    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 00:33:57.134981    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:33:58.921440    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:33:58.932707    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:33:58.932707    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:34:01.097456    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:34:01.097555    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:01.098005    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:34:01.201813    8176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.0667175s)
	I0308 00:34:01.213098    8176 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 00:34:01.219632    8176 command_runner.go:130] > NAME=Buildroot
	I0308 00:34:01.219843    8176 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0308 00:34:01.219843    8176 command_runner.go:130] > ID=buildroot
	I0308 00:34:01.219843    8176 command_runner.go:130] > VERSION_ID=2023.02.9
	I0308 00:34:01.219843    8176 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0308 00:34:01.219843    8176 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 00:34:01.220035    8176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 00:34:01.220232    8176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 00:34:01.221289    8176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 00:34:01.221289    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0308 00:34:01.230383    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 00:34:01.246802    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 00:34:01.285332    8176 start.go:296] duration metric: took 4.1625616s for postStartSetup
	I0308 00:34:01.285455    8176 fix.go:56] duration metric: took 1m17.3366942s for fixHost
	I0308 00:34:01.285575    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:34:03.060818    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:34:03.060818    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:03.061056    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:34:05.190060    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:34:05.190060    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:05.204399    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:34:05.205187    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:34:05.205187    8176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 00:34:05.329715    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709858045.344483499
	
	I0308 00:34:05.329715    8176 fix.go:216] guest clock: 1709858045.344483499
	I0308 00:34:05.329715    8176 fix.go:229] Guest: 2024-03-08 00:34:05.344483499 +0000 UTC Remote: 2024-03-08 00:34:01.2854885 +0000 UTC m=+83.527335301 (delta=4.058994999s)
	I0308 00:34:05.329715    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:34:07.103747    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:34:07.103897    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:07.104033    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:34:09.235729    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:34:09.235729    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:09.242817    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:34:09.243395    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.61.151 22 <nil> <nil>}
	I0308 00:34:09.243395    8176 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709858045
	I0308 00:34:09.382751    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 00:34:05 UTC 2024
	
	I0308 00:34:09.382751    8176 fix.go:236] clock set: Fri Mar  8 00:34:05 UTC 2024
	 (err=<nil>)
	I0308 00:34:09.382751    8176 start.go:83] releasing machines lock for "multinode-397400", held for 1m25.4344373s
	I0308 00:34:09.382751    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:34:11.146236    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:34:11.146236    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:11.146236    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:34:13.336909    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:34:13.336909    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:13.348150    8176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 00:34:13.348341    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:34:13.354436    8176 ssh_runner.go:195] Run: cat /version.json
	I0308 00:34:13.354436    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:34:15.248403    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:34:15.258797    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:15.258895    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:34:15.270142    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:34:15.270142    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:15.270142    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:34:17.512760    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:34:17.512760    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:17.512760    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:34:17.537506    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:34:17.537506    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:17.537506    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:34:17.679025    8176 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0308 00:34:17.679919    8176 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.3317286s)
	I0308 00:34:17.679919    8176 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0308 00:34:17.679919    8176 ssh_runner.go:235] Completed: cat /version.json: (4.325443s)
	I0308 00:34:17.689917    8176 ssh_runner.go:195] Run: systemctl --version
	I0308 00:34:17.698365    8176 command_runner.go:130] > systemd 252 (252)
	I0308 00:34:17.698497    8176 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0308 00:34:17.707859    8176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0308 00:34:17.710963    8176 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0308 00:34:17.710963    8176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 00:34:17.716653    8176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 00:34:17.749040    8176 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0308 00:34:17.749144    8176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 00:34:17.749204    8176 start.go:494] detecting cgroup driver to use...
	I0308 00:34:17.749435    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:34:17.776322    8176 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0308 00:34:17.785683    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 00:34:17.815765    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 00:34:17.830326    8176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 00:34:17.840256    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 00:34:17.869029    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:34:17.895075    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 00:34:17.921271    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:34:17.948781    8176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 00:34:17.975675    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 00:34:18.001988    8176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 00:34:18.017558    8176 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0308 00:34:18.027600    8176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 00:34:18.059131    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:34:18.229672    8176 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0308 00:34:18.257329    8176 start.go:494] detecting cgroup driver to use...
	I0308 00:34:18.269538    8176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 00:34:18.290293    8176 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0308 00:34:18.290365    8176 command_runner.go:130] > [Unit]
	I0308 00:34:18.290365    8176 command_runner.go:130] > Description=Docker Application Container Engine
	I0308 00:34:18.290365    8176 command_runner.go:130] > Documentation=https://docs.docker.com
	I0308 00:34:18.290365    8176 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0308 00:34:18.290365    8176 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0308 00:34:18.290365    8176 command_runner.go:130] > StartLimitBurst=3
	I0308 00:34:18.290365    8176 command_runner.go:130] > StartLimitIntervalSec=60
	I0308 00:34:18.290365    8176 command_runner.go:130] > [Service]
	I0308 00:34:18.290365    8176 command_runner.go:130] > Type=notify
	I0308 00:34:18.290365    8176 command_runner.go:130] > Restart=on-failure
	I0308 00:34:18.290486    8176 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0308 00:34:18.290486    8176 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0308 00:34:18.290544    8176 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0308 00:34:18.290544    8176 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0308 00:34:18.290591    8176 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0308 00:34:18.290591    8176 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0308 00:34:18.290591    8176 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0308 00:34:18.290672    8176 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0308 00:34:18.290672    8176 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0308 00:34:18.290672    8176 command_runner.go:130] > ExecStart=
	I0308 00:34:18.290733    8176 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0308 00:34:18.290733    8176 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0308 00:34:18.290733    8176 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0308 00:34:18.290802    8176 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0308 00:34:18.290802    8176 command_runner.go:130] > LimitNOFILE=infinity
	I0308 00:34:18.290802    8176 command_runner.go:130] > LimitNPROC=infinity
	I0308 00:34:18.290802    8176 command_runner.go:130] > LimitCORE=infinity
	I0308 00:34:18.290802    8176 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0308 00:34:18.290859    8176 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0308 00:34:18.290859    8176 command_runner.go:130] > TasksMax=infinity
	I0308 00:34:18.290859    8176 command_runner.go:130] > TimeoutStartSec=0
	I0308 00:34:18.290910    8176 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0308 00:34:18.290910    8176 command_runner.go:130] > Delegate=yes
	I0308 00:34:18.290910    8176 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0308 00:34:18.290910    8176 command_runner.go:130] > KillMode=process
	I0308 00:34:18.290910    8176 command_runner.go:130] > [Install]
	I0308 00:34:18.290966    8176 command_runner.go:130] > WantedBy=multi-user.target
	I0308 00:34:18.302551    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:34:18.332913    8176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 00:34:18.371916    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:34:18.404693    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:34:18.436130    8176 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0308 00:34:18.489704    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:34:18.508796    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:34:18.538978    8176 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0308 00:34:18.549101    8176 ssh_runner.go:195] Run: which cri-dockerd
	I0308 00:34:18.552340    8176 command_runner.go:130] > /usr/bin/cri-dockerd
	I0308 00:34:18.567191    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 00:34:18.580746    8176 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 00:34:18.615682    8176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 00:34:18.779942    8176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 00:34:18.917784    8176 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 00:34:18.917784    8176 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0308 00:34:18.957542    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:34:19.119895    8176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 00:34:20.749781    8176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.629693s)
	I0308 00:34:20.761426    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 00:34:20.794002    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:34:20.825718    8176 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 00:34:20.986564    8176 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 00:34:21.141633    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:34:21.311006    8176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 00:34:21.345815    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:34:21.375961    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:34:21.525964    8176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 00:34:21.601972    8176 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 00:34:21.615849    8176 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 00:34:21.622715    8176 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0308 00:34:21.623274    8176 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0308 00:34:21.623310    8176 command_runner.go:130] > Device: 0,22	Inode: 844         Links: 1
	I0308 00:34:21.623310    8176 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0308 00:34:21.623372    8176 command_runner.go:130] > Access: 2024-03-08 00:34:21.560107641 +0000
	I0308 00:34:21.623402    8176 command_runner.go:130] > Modify: 2024-03-08 00:34:21.560107641 +0000
	I0308 00:34:21.623430    8176 command_runner.go:130] > Change: 2024-03-08 00:34:21.563107655 +0000
	I0308 00:34:21.623430    8176 command_runner.go:130] >  Birth: -
	I0308 00:34:21.623595    8176 start.go:562] Will wait 60s for crictl version
	I0308 00:34:21.634447    8176 ssh_runner.go:195] Run: which crictl
	I0308 00:34:21.639395    8176 command_runner.go:130] > /usr/bin/crictl
	I0308 00:34:21.644822    8176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 00:34:21.710147    8176 command_runner.go:130] > Version:  0.1.0
	I0308 00:34:21.710147    8176 command_runner.go:130] > RuntimeName:  docker
	I0308 00:34:21.710147    8176 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0308 00:34:21.710147    8176 command_runner.go:130] > RuntimeApiVersion:  v1
	I0308 00:34:21.710266    8176 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 00:34:21.719696    8176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:34:21.746198    8176 command_runner.go:130] > 24.0.7
	I0308 00:34:21.755767    8176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:34:21.785463    8176 command_runner.go:130] > 24.0.7
	I0308 00:34:21.789739    8176 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 00:34:21.790039    8176 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 00:34:21.794511    8176 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 00:34:21.794511    8176 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 00:34:21.794511    8176 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 00:34:21.794511    8176 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 00:34:21.797147    8176 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 00:34:21.797147    8176 ip.go:210] interface addr: 172.20.48.1/20
	I0308 00:34:21.805391    8176 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 00:34:21.808037    8176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 00:34:21.831308    8176 kubeadm.go:877] updating cluster {Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.61.151 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.61.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.52.190 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress
-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 00:34:21.831610    8176 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 00:34:21.839603    8176 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 00:34:21.862087    8176 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0308 00:34:21.863024    8176 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0308 00:34:21.863096    8176 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0308 00:34:21.863127    8176 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0308 00:34:21.863166    8176 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0308 00:34:21.863201    8176 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0308 00:34:21.863201    8176 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0308 00:34:21.863201    8176 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0308 00:34:21.863201    8176 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 00:34:21.863201    8176 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0308 00:34:21.863273    8176 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0308 00:34:21.863273    8176 docker.go:615] Images already preloaded, skipping extraction
	I0308 00:34:21.872482    8176 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 00:34:21.890235    8176 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0308 00:34:21.890235    8176 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0308 00:34:21.890235    8176 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0308 00:34:21.890235    8176 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0308 00:34:21.890235    8176 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0308 00:34:21.890235    8176 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0308 00:34:21.890235    8176 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0308 00:34:21.890235    8176 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0308 00:34:21.890235    8176 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 00:34:21.890235    8176 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0308 00:34:21.890235    8176 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0308 00:34:21.896126    8176 cache_images.go:84] Images are preloaded, skipping loading
	I0308 00:34:21.896169    8176 kubeadm.go:928] updating node { 172.20.61.151 8443 v1.28.4 docker true true} ...
	I0308 00:34:21.896404    8176 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-397400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.61.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 00:34:21.904400    8176 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0308 00:34:21.936039    8176 command_runner.go:130] > cgroupfs
	I0308 00:34:21.937545    8176 cni.go:84] Creating CNI manager for ""
	I0308 00:34:21.937619    8176 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0308 00:34:21.937660    8176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 00:34:21.937692    8176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.61.151 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-397400 NodeName:multinode-397400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.61.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.61.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 00:34:21.938070    8176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.61.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-397400"
	  kubeletExtraArgs:
	    node-ip: 172.20.61.151
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.61.151"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
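	The kubeadm.yaml above is rendered by minikube from the cluster parameters logged at kubeadm.go:181 (node name, advertise address, CRI socket, and so on). As a rough illustration of that step, the Go sketch below renders just the InitConfiguration stanza with text/template; the struct and template here are illustrative only and are not minikube's actual bootstrapper code.

package main

import (
	"os"
	"text/template"
)

// initCfg holds just the values that vary per node in the InitConfiguration
// stanza shown in the log; the real config carries many more fields.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, initCfg{
		AdvertiseAddress: "172.20.61.151",
		BindPort:         8443,
		NodeName:         "multinode-397400",
		CRISocket:        "unix:///var/run/cri-dockerd.sock",
	})
}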
	I0308 00:34:21.949317    8176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 00:34:21.968593    8176 command_runner.go:130] > kubeadm
	I0308 00:34:21.968632    8176 command_runner.go:130] > kubectl
	I0308 00:34:21.968687    8176 command_runner.go:130] > kubelet
	I0308 00:34:21.968740    8176 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 00:34:21.978953    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 00:34:21.981940    8176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0308 00:34:22.023085    8176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 00:34:22.051693    8176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0308 00:34:22.089122    8176 ssh_runner.go:195] Run: grep 172.20.61.151	control-plane.minikube.internal$ /etc/hosts
	I0308 00:34:22.096189    8176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.61.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
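	The bash one-liner above makes the /etc/hosts entry for control-plane.minikube.internal idempotent: any stale line for that host name is filtered out and the current control-plane IP is appended. A minimal Go sketch of the same update, assuming the path and host name shown in the log (not minikube's implementation, which runs the bash command over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so that exactly one line maps
// ip to host, removing any stale mapping for the same host name first.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimSuffix(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		// Drop lines that already end with "<tab><host>" (stale entries).
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Example values from the log; writing /etc/hosts requires root.
	if err := upsertHost("/etc/hosts", "172.20.61.151", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}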
	I0308 00:34:22.130444    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:34:22.308597    8176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:34:22.334501    8176 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400 for IP: 172.20.61.151
	I0308 00:34:22.334501    8176 certs.go:194] generating shared ca certs ...
	I0308 00:34:22.334576    8176 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:34:22.335311    8176 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 00:34:22.335843    8176 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 00:34:22.336190    8176 certs.go:256] generating profile certs ...
	I0308 00:34:22.337057    8176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\client.key
	I0308 00:34:22.337270    8176 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key.02fc8808
	I0308 00:34:22.337421    8176 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt.02fc8808 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.61.151]
	I0308 00:34:22.587111    8176 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt.02fc8808 ...
	I0308 00:34:22.587111    8176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt.02fc8808: {Name:mk4ff76114cc45ed80b018d6c5c6b8ce527e0f5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:34:22.590417    8176 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key.02fc8808 ...
	I0308 00:34:22.590417    8176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key.02fc8808: {Name:mk785c22b94ac52191b29ae5556f426c124b8fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:34:22.592097    8176 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt.02fc8808 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt
	I0308 00:34:22.597901    8176 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key.02fc8808 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key
	I0308 00:34:22.604903    8176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.key
	I0308 00:34:22.604903    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 00:34:22.606084    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0308 00:34:22.606240    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 00:34:22.606400    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 00:34:22.606565    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0308 00:34:22.606622    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0308 00:34:22.606905    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0308 00:34:22.607135    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0308 00:34:22.607381    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 00:34:22.607381    8176 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 00:34:22.607977    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 00:34:22.608409    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 00:34:22.608721    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 00:34:22.608879    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 00:34:22.608879    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 00:34:22.609501    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:34:22.609837    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0308 00:34:22.609837    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0308 00:34:22.610739    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 00:34:22.656857    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 00:34:22.697992    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 00:34:22.734810    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 00:34:22.783697    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0308 00:34:22.823229    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 00:34:22.864862    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 00:34:22.909778    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 00:34:22.949951    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 00:34:22.987859    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 00:34:23.023596    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 00:34:23.065497    8176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 00:34:23.101133    8176 ssh_runner.go:195] Run: openssl version
	I0308 00:34:23.109410    8176 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0308 00:34:23.118886    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 00:34:23.147351    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:34:23.150322    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:34:23.150322    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:34:23.155717    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:34:23.172927    8176 command_runner.go:130] > b5213941
	I0308 00:34:23.184619    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 00:34:23.212501    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 00:34:23.239151    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 00:34:23.243661    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:34:23.245251    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:34:23.255109    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 00:34:23.257956    8176 command_runner.go:130] > 51391683
	I0308 00:34:23.272509    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0308 00:34:23.300626    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 00:34:23.326819    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 00:34:23.333991    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:34:23.334068    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:34:23.343448    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 00:34:23.351995    8176 command_runner.go:130] > 3ec20f2e
	I0308 00:34:23.364707    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
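	Each CA above is installed into the guest's trust store by copying it under /usr/share/ca-certificates and then symlinking it into /etc/ssl/certs under its OpenSSL subject hash (the short hex values such as b5213941 printed by `openssl x509 -hash -noout`). A hedged Go sketch of that linking step, shelling out to openssl as the log does and assuming it is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert installs certPath into the system trust dir the same way the log
// does: compute the OpenSSL subject hash, then symlink <hash>.0 to the cert.
func linkCert(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // refresh a stale link if one exists
	return os.Symlink(certPath, link)
}

func main() {
	// Paths from the log; running this for real requires root.
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}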
	I0308 00:34:23.392844    8176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 00:34:23.398920    8176 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 00:34:23.398920    8176 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0308 00:34:23.398920    8176 command_runner.go:130] > Device: 8,1	Inode: 1053989     Links: 1
	I0308 00:34:23.398920    8176 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0308 00:34:23.398920    8176 command_runner.go:130] > Access: 2024-03-08 00:13:27.799342596 +0000
	I0308 00:34:23.398920    8176 command_runner.go:130] > Modify: 2024-03-08 00:13:27.799342596 +0000
	I0308 00:34:23.399097    8176 command_runner.go:130] > Change: 2024-03-08 00:13:27.799342596 +0000
	I0308 00:34:23.399097    8176 command_runner.go:130] >  Birth: 2024-03-08 00:13:27.799342596 +0000
	I0308 00:34:23.409065    8176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 00:34:23.418273    8176 command_runner.go:130] > Certificate will not expire
	I0308 00:34:23.428091    8176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 00:34:23.432951    8176 command_runner.go:130] > Certificate will not expire
	I0308 00:34:23.446895    8176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 00:34:23.455308    8176 command_runner.go:130] > Certificate will not expire
	I0308 00:34:23.464761    8176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 00:34:23.474341    8176 command_runner.go:130] > Certificate will not expire
	I0308 00:34:23.485334    8176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 00:34:23.493471    8176 command_runner.go:130] > Certificate will not expire
	I0308 00:34:23.505480    8176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 00:34:23.509885    8176 command_runner.go:130] > Certificate will not expire
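	The `-checkend 86400` invocations above ask OpenSSL whether each control-plane certificate will still be valid 24 hours from now; a certificate about to expire would force regeneration instead of reuse. The same check can be expressed in-process with crypto/x509, as in this sketch (the path is one of the certificates from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before
// now+window, matching `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}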
	I0308 00:34:23.514245    8176 kubeadm.go:391] StartCluster: {Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.61.151 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.61.226 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.52.190 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 00:34:23.523376    8176 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 00:34:23.554863    8176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 00:34:23.564221    8176 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0308 00:34:23.564221    8176 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0308 00:34:23.564221    8176 command_runner.go:130] > /var/lib/minikube/etcd:
	I0308 00:34:23.564221    8176 command_runner.go:130] > member
	W0308 00:34:23.571303    8176 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 00:34:23.571339    8176 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 00:34:23.571339    8176 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 00:34:23.582452    8176 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 00:34:23.598944    8176 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 00:34:23.599884    8176 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-397400" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:34:23.600631    8176 kubeconfig.go:62] C:\Users\jenkins.minikube7\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-397400" cluster setting kubeconfig missing "multinode-397400" context setting]
	I0308 00:34:23.601507    8176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:34:23.614908    8176 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:34:23.615514    8176 kapi.go:59] client config for multinode-397400: &rest.Config{Host:"https://172.20.61.151:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400/client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400/client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 00:34:23.616221    8176 cert_rotation.go:137] Starting client certificate rotation controller
	I0308 00:34:23.620485    8176 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 00:34:23.640039    8176 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0308 00:34:23.640127    8176 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0308 00:34:23.640127    8176 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0308 00:34:23.640127    8176 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0308 00:34:23.640127    8176 command_runner.go:130] >  kind: InitConfiguration
	I0308 00:34:23.640161    8176 command_runner.go:130] >  localAPIEndpoint:
	I0308 00:34:23.640161    8176 command_runner.go:130] > -  advertiseAddress: 172.20.48.212
	I0308 00:34:23.640161    8176 command_runner.go:130] > +  advertiseAddress: 172.20.61.151
	I0308 00:34:23.640161    8176 command_runner.go:130] >    bindPort: 8443
	I0308 00:34:23.640214    8176 command_runner.go:130] >  bootstrapTokens:
	I0308 00:34:23.640214    8176 command_runner.go:130] >    - groups:
	I0308 00:34:23.640214    8176 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0308 00:34:23.640253    8176 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0308 00:34:23.640253    8176 command_runner.go:130] >    name: "multinode-397400"
	I0308 00:34:23.640291    8176 command_runner.go:130] >    kubeletExtraArgs:
	I0308 00:34:23.640291    8176 command_runner.go:130] > -    node-ip: 172.20.48.212
	I0308 00:34:23.640318    8176 command_runner.go:130] > +    node-ip: 172.20.61.151
	I0308 00:34:23.640318    8176 command_runner.go:130] >    taints: []
	I0308 00:34:23.640318    8176 command_runner.go:130] >  ---
	I0308 00:34:23.640352    8176 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0308 00:34:23.640352    8176 command_runner.go:130] >  kind: ClusterConfiguration
	I0308 00:34:23.640391    8176 command_runner.go:130] >  apiServer:
	I0308 00:34:23.640458    8176 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.20.48.212"]
	I0308 00:34:23.640518    8176 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.20.61.151"]
	I0308 00:34:23.640540    8176 command_runner.go:130] >    extraArgs:
	I0308 00:34:23.640540    8176 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0308 00:34:23.640540    8176 command_runner.go:130] >  controllerManager:
	I0308 00:34:23.640662    8176 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.20.48.212
	+  advertiseAddress: 172.20.61.151
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-397400"
	   kubeletExtraArgs:
	-    node-ip: 172.20.48.212
	+    node-ip: 172.20.61.151
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.20.48.212"]
	+  certSANs: ["127.0.0.1", "localhost", "172.20.61.151"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
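	Drift detection here is simply `diff -u` between the kubeadm.yaml already on the node and the freshly rendered kubeadm.yaml.new; because the node's IP moved from 172.20.48.212 to 172.20.61.151, the diff is non-empty and minikube reconfigures the control plane rather than reusing the stale config. A small sketch of that decision, assuming standard diff exit codes (0 = identical, 1 = different):

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and reports whether the files differ.
// diff exits 0 when identical, 1 when different, >1 on error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("kubeadm config drift detected, reconfiguring:\n" + diff)
	}
}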
	I0308 00:34:23.640712    8176 kubeadm.go:1153] stopping kube-system containers ...
	I0308 00:34:23.648657    8176 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 00:34:23.671840    8176 command_runner.go:130] > b8903699a2e3
	I0308 00:34:23.671840    8176 command_runner.go:130] > 84e1da671abd
	I0308 00:34:23.671840    8176 command_runner.go:130] > 13e6ea5ce4bd
	I0308 00:34:23.671840    8176 command_runner.go:130] > fdffd4f1db96
	I0308 00:34:23.671840    8176 command_runner.go:130] > 91ada1ebb521
	I0308 00:34:23.671840    8176 command_runner.go:130] > 79433b5ca644
	I0308 00:34:23.671840    8176 command_runner.go:130] > 9c957cee5d35
	I0308 00:34:23.671840    8176 command_runner.go:130] > 90ba9a9d99a3
	I0308 00:34:23.671840    8176 command_runner.go:130] > 0aaf57b801fb
	I0308 00:34:23.672952    8176 command_runner.go:130] > 4f8851b13458
	I0308 00:34:23.672952    8176 command_runner.go:130] > 23ccdb1fc3b5
	I0308 00:34:23.672952    8176 command_runner.go:130] > c0241fd304ad
	I0308 00:34:23.672952    8176 command_runner.go:130] > d4b57713d431
	I0308 00:34:23.672952    8176 command_runner.go:130] > ead2ed31c6b3
	I0308 00:34:23.672952    8176 command_runner.go:130] > 6b6ed8345b8f
	I0308 00:34:23.672952    8176 command_runner.go:130] > 45fec6e97f7a
	I0308 00:34:23.673034    8176 docker.go:483] Stopping containers: [b8903699a2e3 84e1da671abd 13e6ea5ce4bd fdffd4f1db96 91ada1ebb521 79433b5ca644 9c957cee5d35 90ba9a9d99a3 0aaf57b801fb 4f8851b13458 23ccdb1fc3b5 c0241fd304ad d4b57713d431 ead2ed31c6b3 6b6ed8345b8f 45fec6e97f7a]
	I0308 00:34:23.681325    8176 ssh_runner.go:195] Run: docker stop b8903699a2e3 84e1da671abd 13e6ea5ce4bd fdffd4f1db96 91ada1ebb521 79433b5ca644 9c957cee5d35 90ba9a9d99a3 0aaf57b801fb 4f8851b13458 23ccdb1fc3b5 c0241fd304ad d4b57713d431 ead2ed31c6b3 6b6ed8345b8f 45fec6e97f7a
	I0308 00:34:23.702772    8176 command_runner.go:130] > b8903699a2e3
	I0308 00:34:23.702772    8176 command_runner.go:130] > 84e1da671abd
	I0308 00:34:23.702772    8176 command_runner.go:130] > 13e6ea5ce4bd
	I0308 00:34:23.702772    8176 command_runner.go:130] > fdffd4f1db96
	I0308 00:34:23.702772    8176 command_runner.go:130] > 91ada1ebb521
	I0308 00:34:23.702772    8176 command_runner.go:130] > 79433b5ca644
	I0308 00:34:23.703691    8176 command_runner.go:130] > 9c957cee5d35
	I0308 00:34:23.703691    8176 command_runner.go:130] > 90ba9a9d99a3
	I0308 00:34:23.703691    8176 command_runner.go:130] > 0aaf57b801fb
	I0308 00:34:23.703691    8176 command_runner.go:130] > 4f8851b13458
	I0308 00:34:23.703738    8176 command_runner.go:130] > 23ccdb1fc3b5
	I0308 00:34:23.703738    8176 command_runner.go:130] > c0241fd304ad
	I0308 00:34:23.703738    8176 command_runner.go:130] > d4b57713d431
	I0308 00:34:23.703766    8176 command_runner.go:130] > ead2ed31c6b3
	I0308 00:34:23.703766    8176 command_runner.go:130] > 6b6ed8345b8f
	I0308 00:34:23.703766    8176 command_runner.go:130] > 45fec6e97f7a
	I0308 00:34:23.713740    8176 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 00:34:23.746320    8176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 00:34:23.757805    8176 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0308 00:34:23.757805    8176 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0308 00:34:23.757805    8176 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0308 00:34:23.757805    8176 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 00:34:23.762788    8176 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 00:34:23.762788    8176 kubeadm.go:156] found existing configuration files:
	
	I0308 00:34:23.772541    8176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 00:34:23.791099    8176 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 00:34:23.791655    8176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 00:34:23.804267    8176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 00:34:23.832008    8176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 00:34:23.834172    8176 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 00:34:23.846253    8176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 00:34:23.857107    8176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 00:34:23.883304    8176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 00:34:23.885109    8176 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 00:34:23.897616    8176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 00:34:23.909113    8176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 00:34:23.933201    8176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 00:34:23.947101    8176 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 00:34:23.948205    8176 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 00:34:23.957739    8176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 00:34:23.984281    8176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 00:34:23.991048    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 00:34:24.374711    8176 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 00:34:24.374792    8176 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0308 00:34:24.374792    8176 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0308 00:34:24.374792    8176 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0308 00:34:24.374864    8176 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0308 00:34:24.374864    8176 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0308 00:34:24.374864    8176 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0308 00:34:24.374864    8176 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0308 00:34:24.374924    8176 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0308 00:34:24.374924    8176 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0308 00:34:24.374924    8176 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0308 00:34:24.374985    8176 command_runner.go:130] > [certs] Using the existing "sa" key
	I0308 00:34:24.374985    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 00:34:25.667520    8176 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 00:34:25.667520    8176 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 00:34:25.667520    8176 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 00:34:25.667520    8176 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 00:34:25.667520    8176 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 00:34:25.667520    8176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2924123s)
	I0308 00:34:25.667520    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 00:34:25.931130    8176 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 00:34:25.931203    8176 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 00:34:25.931203    8176 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0308 00:34:25.931203    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 00:34:26.011234    8176 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 00:34:26.011315    8176 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 00:34:26.011340    8176 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 00:34:26.011340    8176 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 00:34:26.011340    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 00:34:26.093351    8176 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
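	Rather than running a full `kubeadm init`, the restart path replays the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same kubeadm.yaml, which is why each phase above reports reusing existing certificates while writing fresh kubeconfig and manifest files. A sketch of that sequencing, assuming the versioned kubeadm binary path from the log and leaving out the sudo/PATH wrapping minikube adds:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// The phase order seen in the log above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", cfg)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			return
		}
	}
}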
	I0308 00:34:26.093351    8176 api_server.go:52] waiting for apiserver process to appear ...
	I0308 00:34:26.108415    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:34:26.608062    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:34:27.113366    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:34:27.617247    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:34:28.124625    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:34:28.147840    8176 command_runner.go:130] > 1978
	I0308 00:34:28.147963    8176 api_server.go:72] duration metric: took 2.0544693s to wait for apiserver process to appear ...
	I0308 00:34:28.147963    8176 api_server.go:88] waiting for apiserver healthz status ...
	I0308 00:34:28.148046    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:34:31.362412    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 00:34:31.363189    8176 api_server.go:103] status: https://172.20.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 00:34:31.363189    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:34:31.376701    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 00:34:31.376701    8176 api_server.go:103] status: https://172.20.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 00:34:31.651716    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:34:31.659605    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 00:34:31.659695    8176 api_server.go:103] status: https://172.20.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 00:34:32.162623    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:34:32.171509    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 00:34:32.173978    8176 api_server.go:103] status: https://172.20.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 00:34:32.662837    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:34:32.673538    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 00:34:32.673538    8176 api_server.go:103] status: https://172.20.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 00:34:33.150866    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:34:33.157951    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 200:
	ok
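	The readiness loop above polls https://172.20.61.151:8443/healthz roughly every half second: early 403s (anonymous access not yet authorized) and 500s (post-start hooks such as rbac/bootstrap-roles still running) are retried, and the wait ends when the endpoint returns 200 with body "ok". A hedged Go sketch of such a poll; it skips TLS verification for brevity, whereas a real client would trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: a proper client would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://172.20.61.151:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}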
	I0308 00:34:33.159228    8176 round_trippers.go:463] GET https://172.20.61.151:8443/version
	I0308 00:34:33.159228    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:33.159900    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:33.160159    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:33.172576    8176 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0308 00:34:33.172576    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:33.172576    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:33.172576    8176 round_trippers.go:580]     Content-Length: 264
	I0308 00:34:33.172576    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:33 GMT
	I0308 00:34:33.172576    8176 round_trippers.go:580]     Audit-Id: 60fc7eeb-b43b-4f01-bfbc-cea30b7a483f
	I0308 00:34:33.172576    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:33.172576    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:33.172576    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:33.172576    8176 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
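	With healthz passing, the client issues GET /version and reads gitVersion from the JSON payload to record the control-plane version (v1.28.4 here). A short sketch decoding that payload with encoding/json, using the fields shown in the response above:

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the fields of the /version payload shown in the log.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	// Body copied (abridged) from the log above.
	body := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.4","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(body, &v); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("control plane version: %s (%s)\n", v.GitVersion, v.Platform)
}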
	I0308 00:34:33.173105    8176 api_server.go:141] control plane version: v1.28.4
	I0308 00:34:33.173145    8176 api_server.go:131] duration metric: took 5.0250797s to wait for apiserver health ...
	I0308 00:34:33.173145    8176 cni.go:84] Creating CNI manager for ""
	I0308 00:34:33.173145    8176 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0308 00:34:33.176778    8176 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0308 00:34:33.187469    8176 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0308 00:34:33.199410    8176 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0308 00:34:33.199410    8176 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0308 00:34:33.199574    8176 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0308 00:34:33.199574    8176 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0308 00:34:33.199574    8176 command_runner.go:130] > Access: 2024-03-08 00:33:11.768939300 +0000
	I0308 00:34:33.199574    8176 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0308 00:34:33.199574    8176 command_runner.go:130] > Change: 2024-03-08 00:33:04.561000000 +0000
	I0308 00:34:33.199574    8176 command_runner.go:130] >  Birth: -
	I0308 00:34:33.199697    8176 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0308 00:34:33.199817    8176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0308 00:34:33.274756    8176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0308 00:34:34.710226    8176 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0308 00:34:34.710226    8176 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0308 00:34:34.710226    8176 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0308 00:34:34.710226    8176 command_runner.go:130] > daemonset.apps/kindnet configured
	I0308 00:34:34.710226    8176 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.4354563s)
	I0308 00:34:34.710378    8176 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 00:34:34.710537    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:34:34.710537    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:34.710537    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:34.710537    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:34.716013    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:34.716013    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:34.716013    8176 round_trippers.go:580]     Audit-Id: 52247c24-8834-4cf4-b37c-8c0ce7c91443
	I0308 00:34:34.716132    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:34.716132    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:34.716132    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:34.716132    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:34.716132    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:34 GMT
	I0308 00:34:34.717885    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1675"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83581 chars]
	I0308 00:34:34.723925    8176 system_pods.go:59] 12 kube-system pods found
	I0308 00:34:34.723925    8176 system_pods.go:61] "coredns-5dd5756b68-w4hzh" [d164fdff-2fa7-412c-86e6-f0fa957e0361] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 00:34:34.723925    8176 system_pods.go:61] "etcd-multinode-397400" [afdc3d40-e2cf-4751-9d88-09ecca9f4b0a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 00:34:34.723925    8176 system_pods.go:61] "kindnet-jvzwq" [3897294d-bb97-4445-a540-40cedb960e67] Running
	I0308 00:34:34.723925    8176 system_pods.go:61] "kindnet-srl7h" [e3e7e96a-d2bb-4a32-baae-52b0a30ce886] Running
	I0308 00:34:34.724514    8176 system_pods.go:61] "kindnet-wkwtm" [0f4e9963-262a-4dd2-b907-da97715a6378] Running
	I0308 00:34:34.724514    8176 system_pods.go:61] "kube-apiserver-multinode-397400" [1e615aff-4d66-4ded-b27a-16bc990c80a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 00:34:34.724514    8176 system_pods.go:61] "kube-controller-manager-multinode-397400" [33cdb29c-e857-4fc2-b950-4fdde032852f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 00:34:34.724514    8176 system_pods.go:61] "kube-proxy-gw9w9" [9b5de9a2-0643-466e-9a31-4349596c0417] Running
	I0308 00:34:34.724514    8176 system_pods.go:61] "kube-proxy-ktnrd" [e76aaee4-f97d-4d55-b458-893eef62fb22] Running
	I0308 00:34:34.724514    8176 system_pods.go:61] "kube-proxy-nt8td" [dafb9385-fe20-4849-bd58-31dcf82b4a58] Running
	I0308 00:34:34.724514    8176 system_pods.go:61] "kube-scheduler-multinode-397400" [3f029955-80be-4e3d-a157-faec2631b9b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 00:34:34.724514    8176 system_pods.go:61] "storage-provisioner" [81b55677-743c-4d2f-b04f-95928d4a3868] Running
	I0308 00:34:34.724514    8176 system_pods.go:74] duration metric: took 14.1356ms to wait for pod list to return data ...
	I0308 00:34:34.724674    8176 node_conditions.go:102] verifying NodePressure condition ...
	I0308 00:34:34.724745    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes
	I0308 00:34:34.724745    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:34.724822    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:34.724822    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:34.729633    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:34:34.729633    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:34.729633    8176 round_trippers.go:580]     Audit-Id: acc66f97-d700-4597-b9a2-56dd30e8cf5f
	I0308 00:34:34.729633    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:34.729633    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:34.729633    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:34.729633    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:34.729633    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:34 GMT
	I0308 00:34:34.729633    8176 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1675"},"items":[{"metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1651","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15627 chars]
	I0308 00:34:34.731740    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:34:34.731740    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:34:34.731740    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:34:34.731740    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:34:34.731740    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:34:34.731740    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:34:34.731740    8176 node_conditions.go:105] duration metric: took 7.0654ms to run NodePressure ...
	I0308 00:34:34.731740    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 00:34:34.937543    8176 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0308 00:34:35.027195    8176 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0308 00:34:35.033852    8176 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 00:34:35.033852    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0308 00:34:35.033852    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.033852    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.033852    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.035047    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:35.040355    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.040355    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.040355    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.040355    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.040355    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.040355    8176 round_trippers.go:580]     Audit-Id: cd3c4c1d-17f8-421d-81e6-9e92807958bc
	I0308 00:34:35.040441    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.041576    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1677"},"items":[{"metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29350 chars]
	I0308 00:34:35.043330    8176 kubeadm.go:733] kubelet initialised
	I0308 00:34:35.043879    8176 kubeadm.go:734] duration metric: took 10.0269ms waiting for restarted kubelet to initialise ...
	I0308 00:34:35.043879    8176 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:34:35.043963    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:34:35.043963    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.043963    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.043963    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.044672    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:35.044672    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.044672    8176 round_trippers.go:580]     Audit-Id: 719d8539-a467-474c-ae8c-25d50be24139
	I0308 00:34:35.044672    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.044672    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.044672    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.044672    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.044672    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.051863    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1677"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83581 chars]
	I0308 00:34:35.055426    8176 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:35.055604    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:34:35.055665    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.055665    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.055724    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.056365    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:35.058439    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.058439    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.058439    8176 round_trippers.go:580]     Audit-Id: c38158d2-38a1-433f-9fa4-a53016d9da4c
	I0308 00:34:35.058439    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.058439    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.058439    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.058439    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.058663    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0308 00:34:35.059199    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:35.059398    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.059398    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.059398    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.061090    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:35.063838    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.063838    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.063838    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.063838    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.063838    8176 round_trippers.go:580]     Audit-Id: a6744353-cedb-40e9-84aa-d68fa601f24f
	I0308 00:34:35.063838    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.063838    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.064459    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1651","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0308 00:34:35.064989    8176 pod_ready.go:97] node "multinode-397400" hosting pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.065061    8176 pod_ready.go:81] duration metric: took 9.6351ms for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	E0308 00:34:35.065061    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400" hosting pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.065061    8176 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:35.065208    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:35.065266    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.065302    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.065302    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.066657    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:35.068533    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.068533    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.068533    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.068617    8176 round_trippers.go:580]     Audit-Id: c370e46e-a467-4f89-a1d3-c8d6f1e86730
	I0308 00:34:35.068651    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.068651    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.068651    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.068651    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:35.069262    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:35.069299    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.069333    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.069333    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.070786    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:35.070786    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.070786    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.070786    8176 round_trippers.go:580]     Audit-Id: f8607540-177a-4139-8f3f-d2c38fad033a
	I0308 00:34:35.070786    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.073122    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.073122    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.073122    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.073348    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1651","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0308 00:34:35.073348    8176 pod_ready.go:97] node "multinode-397400" hosting pod "etcd-multinode-397400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.073348    8176 pod_ready.go:81] duration metric: took 8.2863ms for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	E0308 00:34:35.073348    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400" hosting pod "etcd-multinode-397400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.073959    8176 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:35.074121    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-397400
	I0308 00:34:35.074121    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.074121    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.074121    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.074820    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:35.077342    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.077342    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.077342    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.077342    8176 round_trippers.go:580]     Audit-Id: 5415cd52-4dfa-414e-9e2b-d56f89784c33
	I0308 00:34:35.077342    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.077342    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.077342    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.077342    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-397400","namespace":"kube-system","uid":"1e615aff-4d66-4ded-b27a-16bc990c80a6","resourceVersion":"1666","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.61.151:8443","kubernetes.io/config.hash":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.mirror":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143837944Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7644 chars]
	I0308 00:34:35.078351    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:35.078427    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.078427    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.078427    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.081722    8176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:34:35.081795    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.081795    8176 round_trippers.go:580]     Audit-Id: 3cb4eb36-6a7a-4d09-9c32-fc599bad85f1
	I0308 00:34:35.081824    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.081824    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.081824    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.081824    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.081824    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.081824    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1651","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0308 00:34:35.082568    8176 pod_ready.go:97] node "multinode-397400" hosting pod "kube-apiserver-multinode-397400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.082568    8176 pod_ready.go:81] duration metric: took 8.6094ms for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	E0308 00:34:35.082568    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400" hosting pod "kube-apiserver-multinode-397400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.082777    8176 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:35.082857    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-397400
	I0308 00:34:35.082914    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.082914    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.082914    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.083241    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:35.083241    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.083241    8176 round_trippers.go:580]     Audit-Id: 3e3395b1-8a68-497e-9674-80ac6e22600b
	I0308 00:34:35.083241    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.083241    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.083241    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.083241    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.083241    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.086303    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-397400","namespace":"kube-system","uid":"33cdb29c-e857-4fc2-b950-4fdde032852f","resourceVersion":"1663","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.mirror":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.seen":"2024-03-08T00:13:39.441057580Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I0308 00:34:35.123199    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:35.123199    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.123199    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.123199    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.123771    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:35.126479    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.126479    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.126479    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.126479    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.126479    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.126479    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.126479    8176 round_trippers.go:580]     Audit-Id: 010a1783-976e-43b1-90c5-f417f8372e44
	I0308 00:34:35.126871    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1651","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0308 00:34:35.127072    8176 pod_ready.go:97] node "multinode-397400" hosting pod "kube-controller-manager-multinode-397400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.127072    8176 pod_ready.go:81] duration metric: took 44.2943ms for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	E0308 00:34:35.127072    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400" hosting pod "kube-controller-manager-multinode-397400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:35.127072    8176 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:35.313888    8176 request.go:629] Waited for 186.5688ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:34:35.314252    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:34:35.314252    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.314252    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.314252    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.320191    8176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:34:35.320191    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.320191    8176 round_trippers.go:580]     Audit-Id: 4120baf2-01d1-45b6-8822-9924e9fa4d3f
	I0308 00:34:35.320191    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.320191    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.320191    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.320191    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.320191    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.320753    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gw9w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b5de9a2-0643-466e-9a31-4349596c0417","resourceVersion":"610","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0308 00:34:35.514517    8176 request.go:629] Waited for 192.9884ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:34:35.514707    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:34:35.514773    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.514773    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.514773    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.515517    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:35.518515    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.518515    8176 round_trippers.go:580]     Audit-Id: 59ee09d1-e9f4-43e9-bfbc-ddef6e505913
	I0308 00:34:35.518515    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.518515    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.518515    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.518515    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.518515    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.518847    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"1341","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0308 00:34:35.519085    8176 pod_ready.go:92] pod "kube-proxy-gw9w9" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:35.519085    8176 pod_ready.go:81] duration metric: took 392.0095ms for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:35.519085    8176 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:35.719894    8176 request.go:629] Waited for 200.6268ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:34:35.720119    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:34:35.720119    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.720119    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.720119    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.720452    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:35.723598    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.723598    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.723598    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.723598    8176 round_trippers.go:580]     Audit-Id: 5c96f248-9e15-42cc-9cd8-bad90a5434a6
	I0308 00:34:35.723598    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.723598    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.723598    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.724064    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ktnrd","generateName":"kube-proxy-","namespace":"kube-system","uid":"e76aaee4-f97d-4d55-b458-893eef62fb22","resourceVersion":"1626","creationTimestamp":"2024-03-08T00:20:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:20:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0308 00:34:35.914237    8176 request.go:629] Waited for 189.5417ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:34:35.914410    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:34:35.914488    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:35.914488    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:35.914488    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:35.916223    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:35.918357    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:35.918397    8176 round_trippers.go:580]     Audit-Id: cb964b14-5978-4fa9-ab7a-95c79cb1fb8e
	I0308 00:34:35.918397    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:35.918423    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:35.918423    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:35.918423    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:35.918423    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:35 GMT
	I0308 00:34:35.918423    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"4a97100d-ade6-4031-b2fe-9e9ba736320e","resourceVersion":"1638","creationTimestamp":"2024-03-08T00:30:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_30_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:30:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0308 00:34:35.919170    8176 pod_ready.go:97] node "multinode-397400-m03" hosting pod "kube-proxy-ktnrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400-m03" has status "Ready":"Unknown"
	I0308 00:34:35.919254    8176 pod_ready.go:81] duration metric: took 400.1655ms for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	E0308 00:34:35.919276    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400-m03" hosting pod "kube-proxy-ktnrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400-m03" has status "Ready":"Unknown"
	I0308 00:34:35.919276    8176 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:36.111506    8176 request.go:629] Waited for 192.0592ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:34:36.111600    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:34:36.111600    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:36.111600    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:36.111600    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:36.112018    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:36.112018    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:36.112018    8176 round_trippers.go:580]     Audit-Id: 4032cbba-7e6b-406c-9472-b2e285bf591c
	I0308 00:34:36.112018    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:36.112018    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:36.112018    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:36.112018    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:36.112018    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:36 GMT
	I0308 00:34:36.115363    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nt8td","generateName":"kube-proxy-","namespace":"kube-system","uid":"dafb9385-fe20-4849-bd58-31dcf82b4a58","resourceVersion":"1674","creationTimestamp":"2024-03-08T00:13:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0308 00:34:36.333175    8176 request.go:629] Waited for 217.0681ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:36.333474    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:36.333474    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:36.333474    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:36.333474    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:36.333897    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:36.333897    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:36.333897    8176 round_trippers.go:580]     Audit-Id: f6ae0c0a-2574-41fc-b050-b1ddda1ef2fa
	I0308 00:34:36.337423    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:36.337423    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:36.337423    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:36.337423    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:36.337423    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:36 GMT
	I0308 00:34:36.337846    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1651","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0308 00:34:36.337892    8176 pod_ready.go:97] node "multinode-397400" hosting pod "kube-proxy-nt8td" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:36.337892    8176 pod_ready.go:81] duration metric: took 418.6121ms for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	E0308 00:34:36.337892    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400" hosting pod "kube-proxy-nt8td" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400" has status "Ready":"False"
	I0308 00:34:36.337892    8176 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:36.518830    8176 request.go:629] Waited for 180.121ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:36.518996    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:36.518996    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:36.518996    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:36.519313    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:36.526206    8176 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 00:34:36.526256    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:36.526256    8176 round_trippers.go:580]     Audit-Id: 7f1f98e1-44e3-4521-a98f-dfd96f558fa0
	I0308 00:34:36.526256    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:36.526256    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:36.526317    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:36.526317    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:36.526317    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:36 GMT
	I0308 00:34:36.527136    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1664","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0308 00:34:36.712513    8176 request.go:629] Waited for 184.4755ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:36.712662    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:36.712662    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:36.712662    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:36.712662    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:36.727917    8176 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0308 00:34:36.727917    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:36.727917    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:36.727917    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:36.727917    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:36.727917    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:36 GMT
	I0308 00:34:36.727917    8176 round_trippers.go:580]     Audit-Id: da952290-7b8b-4f73-bfb0-16265f768b76
	I0308 00:34:36.727917    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:36.727917    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:36.918218    8176 request.go:629] Waited for 78.5538ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:36.918218    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:36.918337    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:36.918337    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:36.918337    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:36.918508    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:36.921739    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:36.921739    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:36.921739    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:36 GMT
	I0308 00:34:36.921739    8176 round_trippers.go:580]     Audit-Id: 448109be-1fb7-460e-a9e9-844fb9065fac
	I0308 00:34:36.921739    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:36.921739    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:36.921739    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:36.922257    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1664","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0308 00:34:37.111389    8176 request.go:629] Waited for 188.1919ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:37.111389    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:37.111389    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:37.111389    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:37.111389    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:37.117131    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:37.117131    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:37.117131    8176 round_trippers.go:580]     Audit-Id: e466edb8-ea88-4faf-8b6b-47cd8ac0a254
	I0308 00:34:37.117131    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:37.117131    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:37.117131    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:37.117131    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:37.117131    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:37 GMT
	I0308 00:34:37.117131    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:37.352876    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:37.352876    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:37.352876    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:37.352876    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:37.353408    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:37.353408    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:37.353408    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:37.357253    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:37.357253    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:37.357253    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:37.357253    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:37 GMT
	I0308 00:34:37.357253    8176 round_trippers.go:580]     Audit-Id: 51ea9908-3ab9-40fb-ac6a-0ec37b8a19c8
	I0308 00:34:37.357343    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1664","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0308 00:34:37.514137    8176 request.go:629] Waited for 155.8959ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:37.514137    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:37.514137    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:37.514137    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:37.514137    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:37.514564    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:37.514564    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:37.514564    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:37.514564    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:37.514564    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:37.514564    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:37 GMT
	I0308 00:34:37.514564    8176 round_trippers.go:580]     Audit-Id: a9927ced-c55b-48f0-8490-180fa2ae4476
	I0308 00:34:37.514564    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:37.517748    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:37.847981    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:37.847981    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:37.847981    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:37.847981    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:37.853897    8176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:34:37.853897    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:37.853897    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:37.853897    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:37.853897    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:37 GMT
	I0308 00:34:37.853897    8176 round_trippers.go:580]     Audit-Id: 39bdac16-ea6a-4cb1-87ac-a5351f1a1541
	I0308 00:34:37.853897    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:37.853897    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:37.854636    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1664","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0308 00:34:37.914886    8176 request.go:629] Waited for 60.098ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:37.915267    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:37.915267    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:37.915372    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:37.915372    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:37.916096    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:37.916096    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:37.916096    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:37 GMT
	I0308 00:34:37.916096    8176 round_trippers.go:580]     Audit-Id: a2a24143-a6fb-4b0d-9440-a9d644397789
	I0308 00:34:37.916096    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:37.916096    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:37.918654    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:37.918654    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:37.918730    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:38.344199    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:38.344199    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:38.344199    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:38.344199    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:38.344761    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:38.344761    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:38.344761    8176 round_trippers.go:580]     Audit-Id: ba7a8944-e158-4a12-9fbe-8e159da83b77
	I0308 00:34:38.344761    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:38.344761    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:38.344761    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:38.344761    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:38.344761    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:38 GMT
	I0308 00:34:38.352976    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1664","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0308 00:34:38.353651    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:38.353717    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:38.353717    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:38.353717    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:38.360311    8176 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 00:34:38.360311    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:38.360351    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:38.360351    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:38 GMT
	I0308 00:34:38.360379    8176 round_trippers.go:580]     Audit-Id: 468fc0ca-462f-41f2-a05b-b308cee31053
	I0308 00:34:38.360379    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:38.360379    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:38.360379    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:38.360379    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:38.361078    8176 pod_ready.go:102] pod "kube-scheduler-multinode-397400" in "kube-system" namespace has status "Ready":"False"
	I0308 00:34:38.849927    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:38.850005    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:38.850005    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:38.850005    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:38.855160    8176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:34:38.855222    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:38.855222    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:38 GMT
	I0308 00:34:38.855222    8176 round_trippers.go:580]     Audit-Id: 618361a3-b244-48c9-b888-1a94fd5ddfa4
	I0308 00:34:38.855222    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:38.855222    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:38.855222    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:38.855222    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:38.855222    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1664","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0308 00:34:38.856066    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:38.856116    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:38.856116    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:38.856116    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:38.856723    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:38.858957    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:38.858957    8176 round_trippers.go:580]     Audit-Id: 587ee92b-d83e-40c4-b69c-907582239c4c
	I0308 00:34:38.858957    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:38.858957    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:38.858957    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:38.858957    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:38.858957    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:38 GMT
	I0308 00:34:38.859217    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:39.353616    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:39.353706    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:39.353706    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:39.353706    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:39.353972    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:39.357037    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:39.357037    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:39.357037    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:39.357037    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:39 GMT
	I0308 00:34:39.357037    8176 round_trippers.go:580]     Audit-Id: 5c99b4e3-b39a-4af4-ad06-f4461e4d9227
	I0308 00:34:39.357037    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:39.357037    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:39.357891    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1744","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0308 00:34:39.358400    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:39.358400    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:39.358400    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:39.358400    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:39.359027    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:39.362125    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:39.362125    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:39.362125    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:39.362125    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:39 GMT
	I0308 00:34:39.362125    8176 round_trippers.go:580]     Audit-Id: 6e05dbdb-9c6a-4950-b588-24bf8b9fd32d
	I0308 00:34:39.362125    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:39.362125    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:39.362290    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:39.362290    8176 pod_ready.go:92] pod "kube-scheduler-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:39.362290    8176 pod_ready.go:81] duration metric: took 3.0243694s for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:39.362290    8176 pod_ready.go:38] duration metric: took 4.3182857s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
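
The roughly 500ms cadence of the GETs above is the per-pod readiness poll that pod_ready.go reports on. The snippet below is a minimal sketch of such a wait using client-go; it assumes a clientset built as in the earlier example, and the function name, interval, and timeout are illustrative values rather than minikube's actual implementation.

    package example

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodReady polls the named pod until its Ready condition is True,
    // mirroring the GET-every-~500ms pattern visible in the log above.
    func waitForPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err // treat lookup errors as fatal for simplicity
                }
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
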
	I0308 00:34:39.362290    8176 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 00:34:39.378659    8176 command_runner.go:130] > -16
	I0308 00:34:39.379263    8176 ops.go:34] apiserver oom_adj: -16
	I0308 00:34:39.379263    8176 kubeadm.go:591] duration metric: took 15.807746s to restartPrimaryControlPlane
	I0308 00:34:39.379263    8176 kubeadm.go:393] duration metric: took 15.8648694s to StartCluster
	I0308 00:34:39.379263    8176 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:34:39.379263    8176 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:34:39.381130    8176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:34:39.382561    8176 start.go:234] Will wait 6m0s for node &{Name: IP:172.20.61.151 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 00:34:39.382628    8176 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 00:34:39.386753    8176 out.go:177] * Verifying Kubernetes components...
	I0308 00:34:39.382628    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:34:39.394131    8176 out.go:177] * Enabled addons: 
	I0308 00:34:39.395079    8176 addons.go:505] duration metric: took 12.5177ms for enable addons: enabled=[]
	I0308 00:34:39.399438    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:34:39.637408    8176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:34:39.686940    8176 node_ready.go:35] waiting up to 6m0s for node "multinode-397400" to be "Ready" ...
	I0308 00:34:39.687223    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:39.687281    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:39.687281    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:39.687281    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:39.687499    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:39.687499    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:39.687499    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:39.687499    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:39 GMT
	I0308 00:34:39.687499    8176 round_trippers.go:580]     Audit-Id: e2ea79eb-800b-4fb3-ba19-3f420a546a7b
	I0308 00:34:39.687499    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:39.687499    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:39.687499    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:39.692451    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:39.692974    8176 node_ready.go:49] node "multinode-397400" has status "Ready":"True"
	I0308 00:34:39.693086    8176 node_ready.go:38] duration metric: took 6.017ms for node "multinode-397400" to be "Ready" ...
	I0308 00:34:39.693086    8176 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:34:39.693277    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:34:39.693277    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:39.693277    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:39.693277    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:39.694091    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:39.694091    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:39.699026    8176 round_trippers.go:580]     Audit-Id: 227f3c4e-7a57-4b1f-b2a9-8fcce01a6aba
	I0308 00:34:39.699026    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:39.699026    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:39.699026    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:39.699026    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:39.699026    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:39 GMT
	I0308 00:34:39.700298    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1744"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83337 chars]
	I0308 00:34:39.704815    8176 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:39.715490    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:34:39.715490    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:39.715550    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:39.715550    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:39.718820    8176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:34:39.718884    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:39.718884    8176 round_trippers.go:580]     Audit-Id: 4e734e47-605c-41f5-942b-5c0e05460d64
	I0308 00:34:39.718884    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:39.718884    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:39.718884    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:39.718944    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:39.718944    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:39 GMT
	I0308 00:34:39.719172    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0308 00:34:39.916105    8176 request.go:629] Waited for 195.9047ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:39.916201    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:39.916201    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:39.916409    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:39.916409    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:39.919897    8176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:34:39.919897    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:39.919897    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:39.919897    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:39 GMT
	I0308 00:34:39.919897    8176 round_trippers.go:580]     Audit-Id: 3beb2fc9-faf0-4231-b416-f8bca6263cbb
	I0308 00:34:39.920053    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:39.920053    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:39.920053    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:39.920298    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:40.220313    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:34:40.220313    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:40.220313    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:40.220313    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:40.224709    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:34:40.224709    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:40.224709    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:40.224709    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:40 GMT
	I0308 00:34:40.224709    8176 round_trippers.go:580]     Audit-Id: 6cf4fb91-3dee-4335-abd5-25dde902a7d3
	I0308 00:34:40.224709    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:40.224709    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:40.224709    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:40.224936    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0308 00:34:40.320446    8176 request.go:629] Waited for 94.7631ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:40.320446    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:40.320446    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:40.320446    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:40.320446    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:40.325733    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:34:40.325794    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:40.325794    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:40.325849    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:40 GMT
	I0308 00:34:40.325849    8176 round_trippers.go:580]     Audit-Id: 26e81ea0-f7f6-47ec-a6fe-00363ee6cbaf
	I0308 00:34:40.325849    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:40.325849    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:40.325849    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:40.326149    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:40.707650    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:34:40.707650    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:40.707650    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:40.707650    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:40.708228    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:40.708228    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:40.708228    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:40.708228    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:40.708228    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:40.708228    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:40.708228    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:40 GMT
	I0308 00:34:40.708228    8176 round_trippers.go:580]     Audit-Id: e83ccf82-c3a8-4560-a791-c8ca0d8d93e2
	I0308 00:34:40.712313    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0308 00:34:40.713020    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:40.713020    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:40.713020    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:40.713020    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:40.715476    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:34:40.715476    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:40.715476    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:40 GMT
	I0308 00:34:40.715476    8176 round_trippers.go:580]     Audit-Id: dea190ad-b2f8-4bbd-a526-f3eed05ea914
	I0308 00:34:40.715476    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:40.715476    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:40.715476    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:40.715476    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:40.715476    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:41.217553    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:34:41.217553    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:41.217553    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:41.217553    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:41.219029    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:41.219029    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:41.219029    8176 round_trippers.go:580]     Audit-Id: b8d56965-8872-4617-b8b2-d2b9a5f644f6
	I0308 00:34:41.219029    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:41.219029    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:41.219029    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:41.219029    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:41.219029    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:41 GMT
	I0308 00:34:41.222249    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0308 00:34:41.222914    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:41.222914    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:41.222914    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:41.222914    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:41.223243    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:41.223243    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:41.223243    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:41.223243    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:41.223243    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:41.223243    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:41 GMT
	I0308 00:34:41.223243    8176 round_trippers.go:580]     Audit-Id: 056a8edd-9502-40ac-a64c-5fbe66d3da11
	I0308 00:34:41.223243    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:41.225784    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:41.705756    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:34:41.705756    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:41.705756    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:41.705756    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:41.706220    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:41.706220    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:41.706220    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:41 GMT
	I0308 00:34:41.706220    8176 round_trippers.go:580]     Audit-Id: be058aff-c34f-44da-add2-7a541e8f6955
	I0308 00:34:41.706220    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:41.706220    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:41.706220    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:41.706220    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:41.711557    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1668","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0308 00:34:41.711745    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:41.711745    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:41.711745    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:41.711745    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:41.712948    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:41.712948    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:41.712948    8176 round_trippers.go:580]     Audit-Id: 39ce9840-2f44-48e5-85a7-59feae5f8ada
	I0308 00:34:41.712948    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:41.712948    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:41.712948    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:41.712948    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:41.712948    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:41 GMT
	I0308 00:34:41.712948    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:41.712948    8176 pod_ready.go:102] pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace has status "Ready":"False"
	I0308 00:34:42.210295    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:34:42.210295    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:42.210295    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:42.210295    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:42.210730    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:42.210730    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:42.210730    8176 round_trippers.go:580]     Audit-Id: 781bbd5e-1555-4b82-90fa-57ecb1be960a
	I0308 00:34:42.210730    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:42.210730    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:42.210730    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:42.210730    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:42.210730    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:42 GMT
	I0308 00:34:42.214867    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1757","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0308 00:34:42.215713    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:42.215713    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:42.215713    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:42.215713    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:42.216526    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:42.216526    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:42.216526    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:42 GMT
	I0308 00:34:42.216526    8176 round_trippers.go:580]     Audit-Id: efa9d218-9c34-4322-8ad2-fe67350d1b02
	I0308 00:34:42.216526    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:42.216526    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:42.216526    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:42.216526    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:42.219346    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:42.219346    8176 pod_ready.go:92] pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:42.219346    8176 pod_ready.go:81] duration metric: took 2.5145072s for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:42.219346    8176 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:42.219346    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:42.219346    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:42.220407    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:42.220407    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:42.220690    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:42.220690    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:42.220690    8176 round_trippers.go:580]     Audit-Id: 2ce5f548-b49c-47e7-a2e1-38e281ac42ee
	I0308 00:34:42.220690    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:42.220690    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:42.228380    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:42.228380    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:42.228380    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:42 GMT
	I0308 00:34:42.228380    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:42.313617    8176 request.go:629] Waited for 84.5496ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:42.313890    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:42.313890    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:42.313890    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:42.313890    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:42.314092    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:42.314092    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:42.314092    8176 round_trippers.go:580]     Audit-Id: 15443abf-8b76-4759-bbd5-efffbb4b4523
	I0308 00:34:42.314092    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:42.314092    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:42.314092    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:42.314092    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:42.314092    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:42 GMT
	I0308 00:34:42.316793    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:42.732005    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:42.732005    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:42.732005    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:42.732005    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:42.732536    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:42.732536    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:42.732536    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:42.732536    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:42.732536    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:42 GMT
	I0308 00:34:42.732536    8176 round_trippers.go:580]     Audit-Id: fe85a65c-0311-4144-ae08-4c5453dc32fc
	I0308 00:34:42.732536    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:42.732536    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:42.736962    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:42.737175    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:42.737175    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:42.737175    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:42.737175    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:42.737981    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:42.737981    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:42.737981    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:42.737981    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:42.737981    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:42.737981    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:42.737981    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:42 GMT
	I0308 00:34:42.737981    8176 round_trippers.go:580]     Audit-Id: 0d94eec8-174c-4df5-bb76-ee429c1fc277
	I0308 00:34:42.740968    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:43.223396    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:43.223396    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:43.223396    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:43.223396    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:43.223858    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:43.223858    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:43.223858    8176 round_trippers.go:580]     Audit-Id: 4934fa24-3628-4971-b95e-6a0647baf02c
	I0308 00:34:43.223858    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:43.223858    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:43.223858    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:43.223858    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:43.223858    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:43 GMT
	I0308 00:34:43.228958    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:43.229167    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:43.229167    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:43.229167    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:43.229167    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:43.235852    8176 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 00:34:43.235906    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:43.235947    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:43 GMT
	I0308 00:34:43.235947    8176 round_trippers.go:580]     Audit-Id: bdb77c69-6775-459e-a0c4-ab3c80c4b1d6
	I0308 00:34:43.235983    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:43.235983    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:43.235983    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:43.235983    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:43.236175    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:43.724719    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:43.724791    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:43.724791    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:43.724791    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:43.725035    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:43.725035    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:43.725035    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:43 GMT
	I0308 00:34:43.725035    8176 round_trippers.go:580]     Audit-Id: 77a50dfd-074c-4b8d-bc94-0e52ded0b5a9
	I0308 00:34:43.725035    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:43.725035    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:43.725035    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:43.725035    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:43.728560    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:43.728686    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:43.728686    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:43.728686    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:43.728686    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:43.729399    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:43.729399    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:43.729399    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:43.729399    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:43 GMT
	I0308 00:34:43.729399    8176 round_trippers.go:580]     Audit-Id: 783dd207-8d68-45c0-a0a3-63a6971e504c
	I0308 00:34:43.729399    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:43.729399    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:43.729399    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:43.732113    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:44.226706    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:44.226706    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:44.226706    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:44.226706    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:44.227437    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:44.227437    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:44.227437    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:44.227437    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:44.227437    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:44.227437    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:44.227437    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:44 GMT
	I0308 00:34:44.227437    8176 round_trippers.go:580]     Audit-Id: 3b51903a-e18c-4506-9842-29d5e1d9c308
	I0308 00:34:44.230717    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:44.231012    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:44.231012    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:44.231012    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:44.231012    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:44.233265    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:34:44.233265    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:44.233265    8176 round_trippers.go:580]     Audit-Id: 71028b31-989c-41c9-9bbf-744e5e5c8316
	I0308 00:34:44.233265    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:44.233265    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:44.234744    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:44.234744    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:44.234744    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:44 GMT
	I0308 00:34:44.234826    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:44.235669    8176 pod_ready.go:102] pod "etcd-multinode-397400" in "kube-system" namespace has status "Ready":"False"
	I0308 00:34:44.724632    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:44.724632    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:44.724632    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:44.724725    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:44.725511    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:44.727786    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:44.727786    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:44.727786    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:44.727786    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:44.727786    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:44.727786    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:44 GMT
	I0308 00:34:44.727786    8176 round_trippers.go:580]     Audit-Id: 06c70828-4b42-47e3-af64-f493e1f6506e
	I0308 00:34:44.728622    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:44.728757    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:44.728757    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:44.728757    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:44.728757    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:44.729541    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:44.731973    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:44.731973    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:44.731973    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:44.731973    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:44.731973    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:44 GMT
	I0308 00:34:44.731973    8176 round_trippers.go:580]     Audit-Id: 6ed3e935-43e7-4830-8b03-3cee016fdf6e
	I0308 00:34:44.732064    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:44.732370    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:45.231292    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:45.231292    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:45.231399    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:45.231399    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:45.231741    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:45.231741    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:45.231741    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:45.235862    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:45.235862    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:45.235862    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:45 GMT
	I0308 00:34:45.235862    8176 round_trippers.go:580]     Audit-Id: 6b537cfc-2d08-4b0e-9917-2031c46a0d65
	I0308 00:34:45.235862    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:45.236043    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:45.236769    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:45.236769    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:45.236834    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:45.236834    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:45.240943    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:34:45.240943    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:45.240943    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:45 GMT
	I0308 00:34:45.240943    8176 round_trippers.go:580]     Audit-Id: abdd7376-b12c-4076-89fa-4de1811be3e8
	I0308 00:34:45.240943    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:45.240943    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:45.240943    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:45.240943    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:45.241564    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:45.723181    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:45.723181    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:45.723181    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:45.723181    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:45.723933    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:45.723933    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:45.727254    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:45.727254    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:45.727254    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:45.727254    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:45 GMT
	I0308 00:34:45.727254    8176 round_trippers.go:580]     Audit-Id: 899a3a0b-2fb7-4890-bc5b-b5ffb9ed36ce
	I0308 00:34:45.727254    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:45.727394    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1665","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0308 00:34:45.728011    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:45.728100    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:45.728100    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:45.728100    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:45.728298    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:45.728298    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:45.731032    8176 round_trippers.go:580]     Audit-Id: 6c1ed798-985b-4653-8b9b-29d53aecaedc
	I0308 00:34:45.731032    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:45.731032    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:45.731032    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:45.731032    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:45.731032    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:45 GMT
	I0308 00:34:45.731499    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:46.229582    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:34:46.229651    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.229651    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.229651    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.230994    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:34:46.233258    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.233258    8176 round_trippers.go:580]     Audit-Id: fa5dd2c0-2539-456a-856c-37f4f891961c
	I0308 00:34:46.233258    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.233258    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.233258    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.233258    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.233258    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.233453    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1768","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0308 00:34:46.233983    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:46.233983    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.233983    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.233983    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.234811    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:46.234811    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.234811    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.234811    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.237784    8176 round_trippers.go:580]     Audit-Id: 9d7696f8-51e0-4b0d-bb11-4496192e2ff0
	I0308 00:34:46.237784    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.237784    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.237784    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.238161    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:46.238294    8176 pod_ready.go:92] pod "etcd-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:46.238294    8176 pod_ready.go:81] duration metric: took 4.0189104s for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:46.238294    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:46.238294    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-397400
	I0308 00:34:46.238294    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.238294    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.238294    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.240779    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:34:46.240779    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.240779    8176 round_trippers.go:580]     Audit-Id: 2f8cc8d2-5f99-444b-9a70-8a1ac16f9a10
	I0308 00:34:46.240779    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.242750    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.242750    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.242750    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.242750    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.243057    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-397400","namespace":"kube-system","uid":"1e615aff-4d66-4ded-b27a-16bc990c80a6","resourceVersion":"1767","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.61.151:8443","kubernetes.io/config.hash":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.mirror":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143837944Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0308 00:34:46.243592    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:46.243592    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.243592    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.243592    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.244143    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:46.244143    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.244143    8176 round_trippers.go:580]     Audit-Id: 86218f0a-24f9-4c53-9fab-5b9d74d256c6
	I0308 00:34:46.247197    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.247197    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.247197    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.247197    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.247197    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.247369    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:46.248176    8176 pod_ready.go:92] pod "kube-apiserver-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:46.248176    8176 pod_ready.go:81] duration metric: took 9.8815ms for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:46.248176    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:46.248352    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-397400
	I0308 00:34:46.248352    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.248352    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.248352    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.248870    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:46.248870    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.251300    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.251300    8176 round_trippers.go:580]     Audit-Id: 1197856d-66bc-471d-ab2d-880c57b1071d
	I0308 00:34:46.251300    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.251300    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.251300    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.251300    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.251720    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-397400","namespace":"kube-system","uid":"33cdb29c-e857-4fc2-b950-4fdde032852f","resourceVersion":"1663","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.mirror":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.seen":"2024-03-08T00:13:39.441057580Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I0308 00:34:46.313024    8176 request.go:629] Waited for 60.7503ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:46.313238    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:46.313296    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.313296    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.313296    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.314094    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:46.316005    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.316005    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.316005    8176 round_trippers.go:580]     Audit-Id: 136f54a4-e9db-4a5f-946e-2b308a98706e
	I0308 00:34:46.316068    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.316068    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.316068    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.316068    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.316335    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:46.762798    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-397400
	I0308 00:34:46.762897    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.762897    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.762897    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.763161    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:46.763161    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.766285    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.766285    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.766285    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.766285    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.766285    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.766285    8176 round_trippers.go:580]     Audit-Id: 4e09f8b6-8329-49d0-ad59-22e9a4fbc912
	I0308 00:34:46.766726    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-397400","namespace":"kube-system","uid":"33cdb29c-e857-4fc2-b950-4fdde032852f","resourceVersion":"1769","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.mirror":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.seen":"2024-03-08T00:13:39.441057580Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0308 00:34:46.767444    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:46.767444    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.767444    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.767444    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.768331    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:46.768331    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.768331    8176 round_trippers.go:580]     Audit-Id: 650df614-1940-4eba-a242-5c90d8b979bd
	I0308 00:34:46.768331    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.768331    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.768331    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.768331    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.768331    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.771114    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:46.771332    8176 pod_ready.go:92] pod "kube-controller-manager-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:46.771332    8176 pod_ready.go:81] duration metric: took 523.1512ms for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:46.771332    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:46.912918    8176 request.go:629] Waited for 141.4168ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:34:46.913128    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:34:46.913212    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:46.913212    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:46.916301    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:46.916562    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:46.916562    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:46.916562    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:46.916562    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:46.916562    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:46.916562    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:46 GMT
	I0308 00:34:46.916562    8176 round_trippers.go:580]     Audit-Id: adfa35f1-e41b-40c0-a500-5d7c7bb423be
	I0308 00:34:46.916562    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:46.919370    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gw9w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b5de9a2-0643-466e-9a31-4349596c0417","resourceVersion":"610","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0308 00:34:47.123568    8176 request.go:629] Waited for 203.4905ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:34:47.123568    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:34:47.123568    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:47.123568    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:47.123568    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:47.124411    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:47.124411    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:47.124411    8176 round_trippers.go:580]     Audit-Id: f268e230-8c85-428d-a852-85086f64ffdd
	I0308 00:34:47.124411    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:47.124411    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:47.124411    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:47.127418    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:47.127418    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:47 GMT
	I0308 00:34:47.127721    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d","resourceVersion":"1341","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_16_35_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0308 00:34:47.127721    8176 pod_ready.go:92] pod "kube-proxy-gw9w9" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:47.128276    8176 pod_ready.go:81] duration metric: took 356.3855ms for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:47.128276    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:47.320046    8176 request.go:629] Waited for 191.4774ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:34:47.320437    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:34:47.320437    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:47.320509    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:47.320509    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:47.321191    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:47.321191    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:47.321191    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:47.324026    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:47.324026    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:47 GMT
	I0308 00:34:47.324026    8176 round_trippers.go:580]     Audit-Id: 7b244d25-03da-4b60-8dac-7d0dc1df73f7
	I0308 00:34:47.324026    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:47.324026    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:47.324248    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ktnrd","generateName":"kube-proxy-","namespace":"kube-system","uid":"e76aaee4-f97d-4d55-b458-893eef62fb22","resourceVersion":"1626","creationTimestamp":"2024-03-08T00:20:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:20:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0308 00:34:47.513266    8176 request.go:629] Waited for 189.016ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:34:47.513449    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:34:47.513590    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:47.513590    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:47.513590    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:47.514314    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:47.514314    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:47.514314    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:47.514314    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:47.514314    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:47.514314    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:47.517380    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:47 GMT
	I0308 00:34:47.517380    8176 round_trippers.go:580]     Audit-Id: fae6c14b-7c29-4c30-bd28-79989b5d6cea
	I0308 00:34:47.517556    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"4a97100d-ade6-4031-b2fe-9e9ba736320e","resourceVersion":"1765","creationTimestamp":"2024-03-08T00:30:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_30_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:30:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0308 00:34:47.517626    8176 pod_ready.go:97] node "multinode-397400-m03" hosting pod "kube-proxy-ktnrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400-m03" has status "Ready":"Unknown"
	I0308 00:34:47.517626    8176 pod_ready.go:81] duration metric: took 389.3468ms for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	E0308 00:34:47.517626    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400-m03" hosting pod "kube-proxy-ktnrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400-m03" has status "Ready":"Unknown"
	I0308 00:34:47.518180    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:47.717836    8176 request.go:629] Waited for 199.5756ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:34:47.718045    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:34:47.718045    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:47.718045    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:47.718045    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:47.718409    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:47.718409    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:47.718409    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:47.718409    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:47.718409    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:47.718409    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:47.718409    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:47 GMT
	I0308 00:34:47.718409    8176 round_trippers.go:580]     Audit-Id: 599f684a-9792-4ce0-9605-d163cfc4d4cd
	I0308 00:34:47.721673    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nt8td","generateName":"kube-proxy-","namespace":"kube-system","uid":"dafb9385-fe20-4849-bd58-31dcf82b4a58","resourceVersion":"1674","creationTimestamp":"2024-03-08T00:13:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0308 00:34:47.917915    8176 request.go:629] Waited for 195.1974ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:47.917915    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:47.918109    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:47.918109    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:47.918109    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:47.918466    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:47.918466    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:47.918466    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:47.918466    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:47.918466    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:47.918466    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:47.918466    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:47 GMT
	I0308 00:34:47.918466    8176 round_trippers.go:580]     Audit-Id: 6c4cb79c-0319-4eda-baac-edbbe3ec49dc
	I0308 00:34:47.921827    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:47.922357    8176 pod_ready.go:92] pod "kube-proxy-nt8td" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:47.922357    8176 pod_ready.go:81] duration metric: took 404.1731ms for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:47.922357    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:48.115035    8176 request.go:629] Waited for 192.4523ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:48.115470    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:34:48.115497    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:48.115497    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:48.115497    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:48.119099    8176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:34:48.119099    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:48.119099    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:48.119099    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:48 GMT
	I0308 00:34:48.119099    8176 round_trippers.go:580]     Audit-Id: 4e2a6673-f649-43ce-9108-49560b16ab40
	I0308 00:34:48.119099    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:48.119099    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:48.119099    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:48.119099    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1744","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0308 00:34:48.326335    8176 request.go:629] Waited for 205.9565ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:48.326582    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:34:48.326582    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:48.326582    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:48.326582    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:48.327260    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:48.330868    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:48.330868    8176 round_trippers.go:580]     Audit-Id: 9375966d-5a38-4ad5-8ac2-7b83d8db35b0
	I0308 00:34:48.330868    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:48.330868    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:48.330868    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:48.330868    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:48.330868    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:48 GMT
	I0308 00:34:48.331077    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:34:48.331200    8176 pod_ready.go:92] pod "kube-scheduler-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:34:48.331200    8176 pod_ready.go:81] duration metric: took 408.8397ms for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:34:48.331200    8176 pod_ready.go:38] duration metric: took 8.6380334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:34:48.331200    8176 api_server.go:52] waiting for apiserver process to appear ...
	I0308 00:34:48.340060    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:34:48.363877    8176 command_runner.go:130] > 1978
	I0308 00:34:48.363877    8176 api_server.go:72] duration metric: took 8.9811648s to wait for apiserver process to appear ...
	I0308 00:34:48.363991    8176 api_server.go:88] waiting for apiserver healthz status ...
	I0308 00:34:48.363991    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:34:48.369939    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 200:
	ok
	I0308 00:34:48.372421    8176 round_trippers.go:463] GET https://172.20.61.151:8443/version
	I0308 00:34:48.372470    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:48.372470    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:48.372497    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:48.375787    8176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:34:48.375787    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:48.375787    8176 round_trippers.go:580]     Content-Length: 264
	I0308 00:34:48.375787    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:48 GMT
	I0308 00:34:48.375787    8176 round_trippers.go:580]     Audit-Id: deb4d218-80ac-49e7-874e-ff4126b2472c
	I0308 00:34:48.375787    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:48.375787    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:48.375787    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:48.375787    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:48.375787    8176 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0308 00:34:48.375787    8176 api_server.go:141] control plane version: v1.28.4
	I0308 00:34:48.375787    8176 api_server.go:131] duration metric: took 11.7956ms to wait for apiserver health ...
	I0308 00:34:48.375787    8176 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 00:34:48.514599    8176 request.go:629] Waited for 138.6008ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:34:48.514679    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:34:48.514679    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:48.514679    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:48.514679    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:48.521619    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:48.521619    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:48.521619    8176 round_trippers.go:580]     Audit-Id: c8aa2d3f-e087-4ae3-9f84-747bdc0afce7
	I0308 00:34:48.521619    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:48.521619    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:48.521619    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:48.521619    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:48.521619    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:48 GMT
	I0308 00:34:48.523586    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1769"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1757","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82555 chars]
	I0308 00:34:48.527648    8176 system_pods.go:59] 12 kube-system pods found
	I0308 00:34:48.527648    8176 system_pods.go:61] "coredns-5dd5756b68-w4hzh" [d164fdff-2fa7-412c-86e6-f0fa957e0361] Running
	I0308 00:34:48.527648    8176 system_pods.go:61] "etcd-multinode-397400" [afdc3d40-e2cf-4751-9d88-09ecca9f4b0a] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kindnet-jvzwq" [3897294d-bb97-4445-a540-40cedb960e67] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kindnet-srl7h" [e3e7e96a-d2bb-4a32-baae-52b0a30ce886] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kindnet-wkwtm" [0f4e9963-262a-4dd2-b907-da97715a6378] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kube-apiserver-multinode-397400" [1e615aff-4d66-4ded-b27a-16bc990c80a6] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kube-controller-manager-multinode-397400" [33cdb29c-e857-4fc2-b950-4fdde032852f] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kube-proxy-gw9w9" [9b5de9a2-0643-466e-9a31-4349596c0417] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kube-proxy-ktnrd" [e76aaee4-f97d-4d55-b458-893eef62fb22] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kube-proxy-nt8td" [dafb9385-fe20-4849-bd58-31dcf82b4a58] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "kube-scheduler-multinode-397400" [3f029955-80be-4e3d-a157-faec2631b9b8] Running
	I0308 00:34:48.527755    8176 system_pods.go:61] "storage-provisioner" [81b55677-743c-4d2f-b04f-95928d4a3868] Running
	I0308 00:34:48.527755    8176 system_pods.go:74] duration metric: took 151.9674ms to wait for pod list to return data ...
	I0308 00:34:48.527755    8176 default_sa.go:34] waiting for default service account to be created ...
	I0308 00:34:48.716141    8176 request.go:629] Waited for 188.2111ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/default/serviceaccounts
	I0308 00:34:48.716311    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/default/serviceaccounts
	I0308 00:34:48.716311    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:48.716311    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:48.716311    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:48.717199    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:48.717199    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:48.717199    8176 round_trippers.go:580]     Content-Length: 262
	I0308 00:34:48.719805    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:48 GMT
	I0308 00:34:48.719805    8176 round_trippers.go:580]     Audit-Id: f89fabd5-b48c-4458-bfe2-86fee162cffc
	I0308 00:34:48.719805    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:48.719805    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:48.719805    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:48.719805    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:48.719805    8176 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1769"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"095cdd29-7997-44a2-8aa0-51adc17297b9","resourceVersion":"333","creationTimestamp":"2024-03-08T00:13:51Z"}}]}
	I0308 00:34:48.719873    8176 default_sa.go:45] found service account: "default"
	I0308 00:34:48.719873    8176 default_sa.go:55] duration metric: took 192.1162ms for default service account to be created ...
	I0308 00:34:48.719873    8176 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 00:34:48.911262    8176 request.go:629] Waited for 191.3867ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:34:48.911387    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:34:48.911649    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:48.911649    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:48.911649    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:48.919625    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:48.919625    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:48.919625    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:48.919625    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:48.919625    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:48.919625    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:48 GMT
	I0308 00:34:48.919625    8176 round_trippers.go:580]     Audit-Id: 643f6ee0-a787-486a-8419-4c1fdb615dce
	I0308 00:34:48.919625    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:48.920689    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1769"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1757","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82555 chars]
	I0308 00:34:48.924042    8176 system_pods.go:86] 12 kube-system pods found
	I0308 00:34:48.924042    8176 system_pods.go:89] "coredns-5dd5756b68-w4hzh" [d164fdff-2fa7-412c-86e6-f0fa957e0361] Running
	I0308 00:34:48.924042    8176 system_pods.go:89] "etcd-multinode-397400" [afdc3d40-e2cf-4751-9d88-09ecca9f4b0a] Running
	I0308 00:34:48.924042    8176 system_pods.go:89] "kindnet-jvzwq" [3897294d-bb97-4445-a540-40cedb960e67] Running
	I0308 00:34:48.924042    8176 system_pods.go:89] "kindnet-srl7h" [e3e7e96a-d2bb-4a32-baae-52b0a30ce886] Running
	I0308 00:34:48.924042    8176 system_pods.go:89] "kindnet-wkwtm" [0f4e9963-262a-4dd2-b907-da97715a6378] Running
	I0308 00:34:48.924042    8176 system_pods.go:89] "kube-apiserver-multinode-397400" [1e615aff-4d66-4ded-b27a-16bc990c80a6] Running
	I0308 00:34:48.924042    8176 system_pods.go:89] "kube-controller-manager-multinode-397400" [33cdb29c-e857-4fc2-b950-4fdde032852f] Running
	I0308 00:34:48.924611    8176 system_pods.go:89] "kube-proxy-gw9w9" [9b5de9a2-0643-466e-9a31-4349596c0417] Running
	I0308 00:34:48.924611    8176 system_pods.go:89] "kube-proxy-ktnrd" [e76aaee4-f97d-4d55-b458-893eef62fb22] Running
	I0308 00:34:48.924611    8176 system_pods.go:89] "kube-proxy-nt8td" [dafb9385-fe20-4849-bd58-31dcf82b4a58] Running
	I0308 00:34:48.924611    8176 system_pods.go:89] "kube-scheduler-multinode-397400" [3f029955-80be-4e3d-a157-faec2631b9b8] Running
	I0308 00:34:48.924611    8176 system_pods.go:89] "storage-provisioner" [81b55677-743c-4d2f-b04f-95928d4a3868] Running
	I0308 00:34:48.924611    8176 system_pods.go:126] duration metric: took 204.736ms to wait for k8s-apps to be running ...
	I0308 00:34:48.924611    8176 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 00:34:48.934478    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 00:34:48.957476    8176 system_svc.go:56] duration metric: took 32.8645ms WaitForService to wait for kubelet
	I0308 00:34:48.957535    8176 kubeadm.go:576] duration metric: took 9.5748171s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 00:34:48.957535    8176 node_conditions.go:102] verifying NodePressure condition ...
	I0308 00:34:49.116129    8176 request.go:629] Waited for 158.1951ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes
	I0308 00:34:49.116254    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes
	I0308 00:34:49.116254    8176 round_trippers.go:469] Request Headers:
	I0308 00:34:49.116254    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:34:49.116254    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:34:49.117044    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:34:49.117044    8176 round_trippers.go:577] Response Headers:
	I0308 00:34:49.117044    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:34:49.121468    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:34:49 GMT
	I0308 00:34:49.121468    8176 round_trippers.go:580]     Audit-Id: 17a1bfa4-24a7-4dd4-8376-005ff18d8454
	I0308 00:34:49.121468    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:34:49.121468    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:34:49.121468    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:34:49.121681    8176 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1769"},"items":[{"metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15500 chars]
	I0308 00:34:49.122917    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:34:49.122917    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:34:49.122917    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:34:49.122917    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:34:49.122917    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:34:49.122917    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:34:49.122917    8176 node_conditions.go:105] duration metric: took 165.3182ms to run NodePressure ...
	I0308 00:34:49.122917    8176 start.go:240] waiting for startup goroutines ...
	I0308 00:34:49.122917    8176 start.go:245] waiting for cluster config update ...
	I0308 00:34:49.122917    8176 start.go:254] writing updated cluster config ...
	I0308 00:34:49.126835    8176 out.go:177] 
	I0308 00:34:49.130331    8176 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:34:49.138155    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:34:49.138155    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:34:49.143344    8176 out.go:177] * Starting "multinode-397400-m02" worker node in "multinode-397400" cluster
	I0308 00:34:49.147063    8176 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 00:34:49.147131    8176 cache.go:56] Caching tarball of preloaded images
	I0308 00:34:49.147535    8176 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0308 00:34:49.147535    8176 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0308 00:34:49.147535    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:34:49.149827    8176 start.go:360] acquireMachinesLock for multinode-397400-m02: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 00:34:49.149945    8176 start.go:364] duration metric: took 118µs to acquireMachinesLock for "multinode-397400-m02"
	I0308 00:34:49.149945    8176 start.go:96] Skipping create...Using existing machine configuration
	I0308 00:34:49.149945    8176 fix.go:54] fixHost starting: m02
	I0308 00:34:49.150553    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:34:50.983263    8176 main.go:141] libmachine: [stdout =====>] : Off
	
	I0308 00:34:50.983322    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:50.983322    8176 fix.go:112] recreateIfNeeded on multinode-397400-m02: state=Stopped err=<nil>
	W0308 00:34:50.983322    8176 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 00:34:50.987182    8176 out.go:177] * Restarting existing hyperv VM for "multinode-397400-m02" ...
	I0308 00:34:50.989569    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-397400-m02
	I0308 00:34:53.753164    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:34:53.753224    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:53.753279    8176 main.go:141] libmachine: Waiting for host to start...
	I0308 00:34:53.753364    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:34:55.755741    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:34:55.755741    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:55.755741    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:34:57.966149    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:34:57.971110    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:34:58.978025    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:00.944166    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:00.944449    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:00.944626    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:03.204005    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:35:03.214059    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:04.226300    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:06.216895    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:06.223560    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:06.223653    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:08.473806    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:35:08.473806    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:09.485466    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:11.456844    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:11.456976    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:11.456976    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:13.762002    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:35:13.762002    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:14.780556    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:16.730333    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:16.730621    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:16.730715    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:18.950810    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:18.950810    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:18.963497    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:20.865017    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:20.865017    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:20.874721    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:23.084481    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:23.094276    8176 main.go:141] libmachine: [stderr =====>] : 
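
For context, the repeated probes above (Hyper-V\Get-VM ... .state followed by ...networkadapters[0]).ipaddresses[0]) run until the restarted VM reports an address, which here resolves to 172.20.50.67. A minimal Go sketch of that polling loop, assuming a Windows host with the Hyper-V PowerShell module and reusing the VM name from this log; this is an illustration, not minikube's actual hyperv driver code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// hypervQuery shells out to PowerShell the same way the log lines above do.
	func hypervQuery(expr string) (string, error) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		const vm = "multinode-397400-m02" // VM name taken from the log
		for {
			ip, err := hypervQuery(fmt.Sprintf("((Hyper-V\\Get-VM %s).networkadapters[0]).ipaddresses[0]", vm))
			if err == nil && ip != "" {
				fmt.Println("VM reachable at", ip) // the log reports 172.20.50.67 at this point
				return
			}
			time.Sleep(time.Second) // the log shows a probe every few seconds
		}
	}
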
	I0308 00:35:23.094745    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:35:23.097354    8176 machine.go:94] provisionDockerMachine start ...
	I0308 00:35:23.097446    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:24.986062    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:24.986062    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:24.986062    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:27.239255    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:27.245000    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:27.248730    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:35:27.250085    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:35:27.250085    8176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 00:35:27.377743    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 00:35:27.377743    8176 buildroot.go:166] provisioning hostname "multinode-397400-m02"
	I0308 00:35:27.377743    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:29.208520    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:29.208520    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:29.208520    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:31.454485    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:31.464380    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:31.469728    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:35:31.470271    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:35:31.470271    8176 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-397400-m02 && echo "multinode-397400-m02" | sudo tee /etc/hostname
	I0308 00:35:31.619093    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-397400-m02
	
	I0308 00:35:31.619147    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:33.471869    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:33.471869    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:33.471869    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:35.652961    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:35.662869    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:35.668274    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:35:35.668724    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:35:35.668789    8176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-397400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-397400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-397400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 00:35:35.812652    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 00:35:35.812754    8176 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 00:35:35.812829    8176 buildroot.go:174] setting up certificates
	I0308 00:35:35.812829    8176 provision.go:84] configureAuth start
	I0308 00:35:35.812893    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:37.660057    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:37.660308    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:37.660410    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:39.837022    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:39.837022    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:39.848074    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:41.699439    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:41.709461    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:41.709461    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:43.964833    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:43.975119    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:43.975119    8176 provision.go:143] copyHostCerts
	I0308 00:35:43.975258    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0308 00:35:43.975415    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 00:35:43.975415    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 00:35:43.975415    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 00:35:43.976642    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0308 00:35:43.976642    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 00:35:43.977207    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 00:35:43.977518    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 00:35:43.978308    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0308 00:35:43.978840    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 00:35:43.978840    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 00:35:43.979228    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 00:35:43.980121    8176 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-397400-m02 san=[127.0.0.1 172.20.50.67 localhost minikube multinode-397400-m02]
	I0308 00:35:44.088419    8176 provision.go:177] copyRemoteCerts
	I0308 00:35:44.110694    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 00:35:44.110694    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:45.958690    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:45.958690    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:45.971572    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:48.202091    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:48.202091    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:48.212139    8176 sshutil.go:53] new ssh client: &{IP:172.20.50.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:35:48.315275    8176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2045408s)
	I0308 00:35:48.315275    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0308 00:35:48.315894    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 00:35:48.357633    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0308 00:35:48.357633    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 00:35:48.397317    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0308 00:35:48.397705    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0308 00:35:48.437704    8176 provision.go:87] duration metric: took 12.6247209s to configureAuth
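
The configureAuth step that just finished regenerates server.pem with the SANs listed in the log (127.0.0.1, 172.20.50.67, localhost, minikube, multinode-397400-m02) and copies it to /etc/docker on the node. A small Go sketch for spot-checking those SANs on the workstation; the path is the one from this log and would need adjusting elsewhere:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Path taken from the log above; adjust for your own .minikube layout.
		data, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem`)
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, minikube, multinode-397400-m02
		fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1 and 172.20.50.67
	}
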
	I0308 00:35:48.437704    8176 buildroot.go:189] setting minikube options for container-runtime
	I0308 00:35:48.437704    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:35:48.438319    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:50.277108    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:50.277275    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:50.277275    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:52.510023    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:52.520722    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:52.525884    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:35:52.526589    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:35:52.526589    8176 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 00:35:52.656647    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 00:35:52.656733    8176 buildroot.go:70] root file system type: tmpfs
	I0308 00:35:52.656816    8176 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 00:35:52.656816    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:54.536025    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:54.536206    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:54.536261    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:35:56.732660    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:35:56.742203    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:56.747768    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:35:56.748322    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:35:56.748389    8176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.61.151"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 00:35:56.895715    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.61.151
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 00:35:56.895813    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:35:58.737184    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:35:58.737184    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:35:58.746540    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:00.958110    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:00.967751    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:00.975827    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:36:00.975827    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:36:00.975827    8176 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 00:36:02.248533    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0308 00:36:02.248601    8176 machine.go:97] duration metric: took 39.1508368s to provisionDockerMachine
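
Two notes on the docker.service provisioning that just completed. First, the "%!s(MISSING)" fragments in the logged commands are evidently Go's printf-style logger rendering a literal %s (or %N, %p) inside the remote shell command with no matching argument; the unit that actually lands on the node is the clean text echoed back under "SSH cmd err, output". Second, the unit clears ExecStart= before setting it because, as its own comment says, more than one ExecStart= is only allowed for Type=oneshot services. A throwaway Go sketch that checks a rendered unit file for exactly one non-empty ExecStart line, purely illustrative:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Path used throughout the log; run this on the node (e.g. via minikube ssh).
		f, err := os.Open("/lib/systemd/system/docker.service")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		count := 0
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			// Count only ExecStart lines that actually carry a command.
			if strings.HasPrefix(line, "ExecStart=") && line != "ExecStart=" {
				count++
			}
		}
		fmt.Printf("non-empty ExecStart lines: %d (want 1)\n", count)
	}
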
	I0308 00:36:02.248630    8176 start.go:293] postStartSetup for "multinode-397400-m02" (driver="hyperv")
	I0308 00:36:02.248655    8176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 00:36:02.260943    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 00:36:02.260943    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:36:04.109849    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:04.109849    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:04.109849    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:06.323006    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:06.323006    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:06.330589    8176 sshutil.go:53] new ssh client: &{IP:172.20.50.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:36:06.440312    8176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1793292s)
	I0308 00:36:06.450955    8176 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 00:36:06.453809    8176 command_runner.go:130] > NAME=Buildroot
	I0308 00:36:06.453809    8176 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0308 00:36:06.453809    8176 command_runner.go:130] > ID=buildroot
	I0308 00:36:06.453809    8176 command_runner.go:130] > VERSION_ID=2023.02.9
	I0308 00:36:06.453809    8176 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0308 00:36:06.457759    8176 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 00:36:06.457759    8176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 00:36:06.457945    8176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 00:36:06.458426    8176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 00:36:06.458426    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0308 00:36:06.459144    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 00:36:06.484248    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 00:36:06.528894    8176 start.go:296] duration metric: took 4.2801975s for postStartSetup
	I0308 00:36:06.528894    8176 fix.go:56] duration metric: took 1m17.3782186s for fixHost
	I0308 00:36:06.528894    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:36:08.364945    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:08.364945    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:08.374514    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:10.644834    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:10.644834    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:10.650763    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:36:10.651375    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:36:10.651400    8176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 00:36:10.782653    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709858170.797430320
	
	I0308 00:36:10.782653    8176 fix.go:216] guest clock: 1709858170.797430320
	I0308 00:36:10.782653    8176 fix.go:229] Guest: 2024-03-08 00:36:10.79743032 +0000 UTC Remote: 2024-03-08 00:36:06.5288941 +0000 UTC m=+208.769560601 (delta=4.26853622s)
	I0308 00:36:10.782653    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:36:12.662073    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:12.662073    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:12.671760    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:14.912911    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:14.912911    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:14.928526    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:36:14.928736    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.50.67 22 <nil> <nil>}
	I0308 00:36:14.928736    8176 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709858170
	I0308 00:36:15.070433    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 00:36:10 UTC 2024
	
	I0308 00:36:15.070433    8176 fix.go:236] clock set: Fri Mar  8 00:36:10 UTC 2024
	 (err=<nil>)
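
The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host-side timestamp, find a drift of about 4.27s, and reset the guest clock with "sudo date -s @1709858170". A small Go sketch of that comparison using the exact values from this log; the 2-second threshold is an assumption chosen for illustration, not minikube's configured value:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Values copied from the log: guest clock vs. the "Remote" (host-side) timestamp.
		guest := time.Unix(1709858170, 797430320)
		host := time.Date(2024, 3, 8, 0, 36, 6, 528894100, time.UTC)

		delta := guest.Sub(host)
		fmt.Println("clock delta:", delta) // the log reports delta=4.26853622s

		if delta > 2*time.Second || delta < -2*time.Second { // threshold chosen for illustration
			fmt.Printf("would run on the guest: sudo date -s @%d\n", guest.Unix())
		}
	}
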
	I0308 00:36:15.070433    8176 start.go:83] releasing machines lock for "multinode-397400-m02", held for 1m25.919677s
	I0308 00:36:15.071057    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:36:16.931460    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:16.931460    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:16.931611    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:19.219316    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:19.230693    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:19.230945    8176 out.go:177] * Found network options:
	I0308 00:36:19.235656    8176 out.go:177]   - NO_PROXY=172.20.61.151
	W0308 00:36:19.238019    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 00:36:19.240089    8176 out.go:177]   - NO_PROXY=172.20.61.151
	W0308 00:36:19.241028    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 00:36:19.241028    8176 proxy.go:119] fail to check proxy env: Error ip not in block
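
The repeated "fail to check proxy env: Error ip not in block" warnings are raised while the node IP is checked against the NO_PROXY entry, which here is a single address (172.20.61.151) rather than a CIDR block. A sketch of one way such a membership test can be written; the helper below is illustrative and is not minikube's proxy.go logic:

	package main

	import (
		"fmt"
		"net"
	)

	// inNoProxy reports whether ip is covered by any NO_PROXY entry, treating each
	// entry first as a CIDR block and then as a bare address. Illustrative only.
	func inNoProxy(ip string, noProxy []string) bool {
		target := net.ParseIP(ip)
		for _, entry := range noProxy {
			if _, block, err := net.ParseCIDR(entry); err == nil {
				if block.Contains(target) {
					return true
				}
				continue
			}
			if net.ParseIP(entry).Equal(target) { // bare-IP entry, as in this log
				return true
			}
		}
		return false
	}

	func main() {
		// Values from the log: m02's address and the NO_PROXY entry for the primary node.
		fmt.Println(inNoProxy("172.20.50.67", []string{"172.20.61.151"}))  // false
		fmt.Println(inNoProxy("172.20.61.151", []string{"172.20.61.151"})) // true
	}
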
	I0308 00:36:19.245975    8176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 00:36:19.245975    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:36:19.254420    8176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0308 00:36:19.254420    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:36:21.205917    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:21.213207    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:21.213207    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:21.230099    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:21.230099    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:21.231738    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:23.590096    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:23.600813    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:23.601260    8176 sshutil.go:53] new ssh client: &{IP:172.20.50.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:36:23.622300    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:23.622300    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:23.623460    8176 sshutil.go:53] new ssh client: &{IP:172.20.50.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:36:23.813299    8176 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0308 00:36:23.813618    8176 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0308 00:36:23.813724    8176 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5677062s)
	I0308 00:36:23.813801    8176 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5592604s)
	W0308 00:36:23.813858    8176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 00:36:23.826444    8176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 00:36:23.843194    8176 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0308 00:36:23.853245    8176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 00:36:23.853245    8176 start.go:494] detecting cgroup driver to use...
	I0308 00:36:23.853416    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:36:23.888149    8176 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0308 00:36:23.897644    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 00:36:23.928573    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 00:36:23.936016    8176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 00:36:23.957800    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 00:36:23.984856    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:36:24.017387    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 00:36:24.046880    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:36:24.073509    8176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 00:36:24.103017    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 00:36:24.132538    8176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 00:36:24.143973    8176 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0308 00:36:24.160643    8176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 00:36:24.192019    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:36:24.360881    8176 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0308 00:36:24.393885    8176 start.go:494] detecting cgroup driver to use...
	I0308 00:36:24.410923    8176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 00:36:24.431080    8176 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0308 00:36:24.431080    8176 command_runner.go:130] > [Unit]
	I0308 00:36:24.431080    8176 command_runner.go:130] > Description=Docker Application Container Engine
	I0308 00:36:24.431080    8176 command_runner.go:130] > Documentation=https://docs.docker.com
	I0308 00:36:24.431080    8176 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0308 00:36:24.431080    8176 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0308 00:36:24.431080    8176 command_runner.go:130] > StartLimitBurst=3
	I0308 00:36:24.431080    8176 command_runner.go:130] > StartLimitIntervalSec=60
	I0308 00:36:24.431080    8176 command_runner.go:130] > [Service]
	I0308 00:36:24.431694    8176 command_runner.go:130] > Type=notify
	I0308 00:36:24.431694    8176 command_runner.go:130] > Restart=on-failure
	I0308 00:36:24.431694    8176 command_runner.go:130] > Environment=NO_PROXY=172.20.61.151
	I0308 00:36:24.431694    8176 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0308 00:36:24.431747    8176 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0308 00:36:24.431772    8176 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0308 00:36:24.431772    8176 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0308 00:36:24.431772    8176 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0308 00:36:24.431904    8176 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0308 00:36:24.431904    8176 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0308 00:36:24.431945    8176 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0308 00:36:24.431945    8176 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0308 00:36:24.431945    8176 command_runner.go:130] > ExecStart=
	I0308 00:36:24.431980    8176 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0308 00:36:24.431980    8176 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0308 00:36:24.432019    8176 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0308 00:36:24.432019    8176 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0308 00:36:24.432019    8176 command_runner.go:130] > LimitNOFILE=infinity
	I0308 00:36:24.432055    8176 command_runner.go:130] > LimitNPROC=infinity
	I0308 00:36:24.432055    8176 command_runner.go:130] > LimitCORE=infinity
	I0308 00:36:24.432055    8176 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0308 00:36:24.432055    8176 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0308 00:36:24.432096    8176 command_runner.go:130] > TasksMax=infinity
	I0308 00:36:24.432096    8176 command_runner.go:130] > TimeoutStartSec=0
	I0308 00:36:24.432096    8176 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0308 00:36:24.432096    8176 command_runner.go:130] > Delegate=yes
	I0308 00:36:24.432131    8176 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0308 00:36:24.432131    8176 command_runner.go:130] > KillMode=process
	I0308 00:36:24.432131    8176 command_runner.go:130] > [Install]
	I0308 00:36:24.432173    8176 command_runner.go:130] > WantedBy=multi-user.target
	I0308 00:36:24.443212    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:36:24.478573    8176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 00:36:24.521721    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:36:24.553443    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:36:24.586011    8176 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0308 00:36:24.651351    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:36:24.672741    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:36:24.704854    8176 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0308 00:36:24.716759    8176 ssh_runner.go:195] Run: which cri-dockerd
	I0308 00:36:24.722392    8176 command_runner.go:130] > /usr/bin/cri-dockerd
	I0308 00:36:24.733413    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 00:36:24.750143    8176 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 00:36:24.794321    8176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 00:36:24.966303    8176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 00:36:25.125838    8176 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 00:36:25.125908    8176 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0308 00:36:25.166197    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:36:25.340343    8176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 00:36:26.904352    8176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.563957s)
	I0308 00:36:26.916489    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 00:36:26.949247    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:36:26.979878    8176 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 00:36:27.150002    8176 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 00:36:27.308625    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:36:27.477627    8176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 00:36:27.517767    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:36:27.549267    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:36:27.721282    8176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 00:36:27.815082    8176 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 00:36:27.826154    8176 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 00:36:27.833371    8176 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0308 00:36:27.834503    8176 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0308 00:36:27.834503    8176 command_runner.go:130] > Device: 0,22	Inode: 851         Links: 1
	I0308 00:36:27.834503    8176 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0308 00:36:27.834503    8176 command_runner.go:130] > Access: 2024-03-08 00:36:27.759423013 +0000
	I0308 00:36:27.834588    8176 command_runner.go:130] > Modify: 2024-03-08 00:36:27.759423013 +0000
	I0308 00:36:27.834608    8176 command_runner.go:130] > Change: 2024-03-08 00:36:27.763423041 +0000
	I0308 00:36:27.834608    8176 command_runner.go:130] >  Birth: -
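
start.go then waits up to 60s for /var/run/cri-dockerd.sock, and the stat output above shows the socket already exists. A minimal Go sketch of such a wait loop; the poll interval and error wording are illustrative, not minikube's exact code:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls os.Stat until path exists or the timeout elapses.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		// Socket path and timeout taken from the log above.
		fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
	}
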
	I0308 00:36:27.834608    8176 start.go:562] Will wait 60s for crictl version
	I0308 00:36:27.846885    8176 ssh_runner.go:195] Run: which crictl
	I0308 00:36:27.849988    8176 command_runner.go:130] > /usr/bin/crictl
	I0308 00:36:27.863585    8176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 00:36:27.930186    8176 command_runner.go:130] > Version:  0.1.0
	I0308 00:36:27.930294    8176 command_runner.go:130] > RuntimeName:  docker
	I0308 00:36:27.930294    8176 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0308 00:36:27.930294    8176 command_runner.go:130] > RuntimeApiVersion:  v1
	I0308 00:36:27.930353    8176 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 00:36:27.939128    8176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:36:27.967277    8176 command_runner.go:130] > 24.0.7
	I0308 00:36:27.976635    8176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:36:28.011011    8176 command_runner.go:130] > 24.0.7
	I0308 00:36:28.016997    8176 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 00:36:28.022193    8176 out.go:177]   - env NO_PROXY=172.20.61.151
	I0308 00:36:28.025119    8176 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 00:36:28.026887    8176 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 00:36:28.029965    8176 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 00:36:28.029965    8176 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 00:36:28.029965    8176 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 00:36:28.030240    8176 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 00:36:28.030240    8176 ip.go:210] interface addr: 172.20.48.1/20
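
The ip.go lines above enumerate the host's interfaces, skip the ones that do not match the "vEthernet (Default Switch)" prefix, and settle on 172.20.48.1/20, which the next lines add to the guest's /etc/hosts as host.minikube.internal. A rough Go equivalent of that lookup, with the prefix string taken from the log and the printing purely illustrative:

	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	func main() {
		const prefix = "vEthernet (Default Switch)" // interface prefix searched for in the log
		ifaces, err := net.Interfaces()
		if err != nil {
			panic(err)
		}
		for _, ifc := range ifaces {
			if !strings.HasPrefix(ifc.Name, prefix) {
				continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1" in the log
			}
			addrs, _ := ifc.Addrs()
			for _, a := range addrs {
				fmt.Printf("found %s addr %s\n", ifc.Name, a) // log shows 172.20.48.1/20 and a link-local v6
			}
		}
	}
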
	I0308 00:36:28.043325    8176 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 00:36:28.049499    8176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 00:36:28.066841    8176 mustload.go:65] Loading cluster: multinode-397400
	I0308 00:36:28.067687    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:36:28.068374    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:36:29.942483    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:29.942483    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:29.953177    8176 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:36:29.953925    8176 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400 for IP: 172.20.50.67
	I0308 00:36:29.953925    8176 certs.go:194] generating shared ca certs ...
	I0308 00:36:29.953992    8176 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:36:29.954636    8176 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 00:36:29.954966    8176 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 00:36:29.955175    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 00:36:29.955455    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0308 00:36:29.955753    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 00:36:29.955918    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 00:36:29.955918    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 00:36:29.956526    8176 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 00:36:29.956767    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 00:36:29.956791    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 00:36:29.956791    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 00:36:29.957454    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 00:36:29.957488    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 00:36:29.957488    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:36:29.958147    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0308 00:36:29.958288    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0308 00:36:29.958467    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 00:36:30.003848    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 00:36:30.048433    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 00:36:30.090490    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 00:36:30.133399    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 00:36:30.173893    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 00:36:30.215607    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 00:36:30.266702    8176 ssh_runner.go:195] Run: openssl version
	I0308 00:36:30.274022    8176 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0308 00:36:30.283731    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 00:36:30.312333    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:36:30.318712    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:36:30.318712    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:36:30.328071    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:36:30.336920    8176 command_runner.go:130] > b5213941
	I0308 00:36:30.348845    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 00:36:30.377781    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 00:36:30.408676    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 00:36:30.411242    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:36:30.414871    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:36:30.425512    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 00:36:30.433383    8176 command_runner.go:130] > 51391683
	I0308 00:36:30.445073    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0308 00:36:30.471651    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 00:36:30.500178    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 00:36:30.503199    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:36:30.503199    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:36:30.517338    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 00:36:30.525569    8176 command_runner.go:130] > 3ec20f2e
	I0308 00:36:30.535655    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 00:36:30.564860    8176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 00:36:30.566643    8176 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 00:36:30.570242    8176 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 00:36:30.570242    8176 kubeadm.go:928] updating node {m02 172.20.50.67 8443 v1.28.4 docker false true} ...
	I0308 00:36:30.570242    8176 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-397400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.50.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 00:36:30.580199    8176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 00:36:30.589246    8176 command_runner.go:130] > kubeadm
	I0308 00:36:30.589246    8176 command_runner.go:130] > kubectl
	I0308 00:36:30.589246    8176 command_runner.go:130] > kubelet
	I0308 00:36:30.589246    8176 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 00:36:30.608965    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0308 00:36:30.625722    8176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0308 00:36:30.654436    8176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 00:36:30.693370    8176 ssh_runner.go:195] Run: grep 172.20.61.151	control-plane.minikube.internal$ /etc/hosts
	I0308 00:36:30.699245    8176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.61.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 00:36:30.726715    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:36:30.913741    8176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:36:30.940078    8176 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:36:30.940421    8176 start.go:316] joinCluster: &{Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.61.151 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.50.67 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.52.190 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:f
alse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 00:36:30.941075    8176 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.20.50.67 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0308 00:36:30.941075    8176 host.go:66] Checking if "multinode-397400-m02" exists ...
	I0308 00:36:30.941075    8176 mustload.go:65] Loading cluster: multinode-397400
	I0308 00:36:30.941915    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:36:30.942533    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:36:32.876755    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:32.886774    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:32.886774    8176 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:36:32.887029    8176 api_server.go:166] Checking apiserver status ...
	I0308 00:36:32.898998    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:36:32.898998    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:36:34.784576    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:34.784576    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:34.795451    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:37.031844    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:36:37.031844    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:37.032363    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:36:37.148798    8176 command_runner.go:130] > 1978
	I0308 00:36:37.148909    8176 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.2498082s)
	I0308 00:36:37.160582    8176 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1978/cgroup
	W0308 00:36:37.174692    8176 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1978/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 00:36:37.184936    8176 ssh_runner.go:195] Run: ls
	I0308 00:36:37.191385    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:36:37.197501    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 200:
	ok
	I0308 00:36:37.208890    8176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-397400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0308 00:36:37.352830    8176 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-jvzwq, kube-system/kube-proxy-gw9w9
	I0308 00:36:40.393632    8176 command_runner.go:130] > node/multinode-397400-m02 cordoned
	I0308 00:36:40.393765    8176 command_runner.go:130] > pod "busybox-5b5d89c9d6-ctt42" has DeletionTimestamp older than 1 seconds, skipping
	I0308 00:36:40.393765    8176 command_runner.go:130] > node/multinode-397400-m02 drained
	I0308 00:36:40.393765    8176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-397400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1848448s)
	I0308 00:36:40.393890    8176 node.go:125] successfully drained node "multinode-397400-m02"
	I0308 00:36:40.394014    8176 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0308 00:36:40.394104    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:36:42.240007    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:42.240007    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:42.250124    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:44.527473    8176 main.go:141] libmachine: [stdout =====>] : 172.20.50.67
	
	I0308 00:36:44.527473    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:44.527596    8176 sshutil.go:53] new ssh client: &{IP:172.20.50.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:36:44.944237    8176 command_runner.go:130] ! W0308 00:36:44.960939    1525 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0308 00:36:45.496320    8176 command_runner.go:130] ! W0308 00:36:45.511660    1525 cleanupnode.go:99] [reset] Failed to remove containers: failed to stop running pod e1279312270ec03fb432b87f141ec78feaaaf402401a919ea8eb0ab2dbd02b67: output: E0308 00:36:45.214172    1589 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-5b5d89c9d6-ctt42_default\" network: cni config uninitialized" podSandboxID="e1279312270ec03fb432b87f141ec78feaaaf402401a919ea8eb0ab2dbd02b67"
	I0308 00:36:45.496320    8176 command_runner.go:130] ! time="2024-03-08T00:36:45Z" level=fatal msg="stopping the pod sandbox \"e1279312270ec03fb432b87f141ec78feaaaf402401a919ea8eb0ab2dbd02b67\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-5b5d89c9d6-ctt42_default\" network: cni config uninitialized"
	I0308 00:36:45.496320    8176 command_runner.go:130] ! : exit status 1
	I0308 00:36:45.518465    8176 command_runner.go:130] > [preflight] Running pre-flight checks
	I0308 00:36:45.518465    8176 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0308 00:36:45.518465    8176 command_runner.go:130] > [reset] Stopping the kubelet service
	I0308 00:36:45.518465    8176 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0308 00:36:45.518465    8176 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0308 00:36:45.518465    8176 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0308 00:36:45.518465    8176 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0308 00:36:45.518465    8176 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0308 00:36:45.518465    8176 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0308 00:36:45.518465    8176 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0308 00:36:45.518465    8176 command_runner.go:130] > to reset your system's IPVS tables.
	I0308 00:36:45.518465    8176 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0308 00:36:45.518465    8176 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0308 00:36:45.518465    8176 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.1244031s)
	I0308 00:36:45.518465    8176 node.go:152] successfully reset node "multinode-397400-m02"
	I0308 00:36:45.519553    8176 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:36:45.520638    8176 kapi.go:59] client config for multinode-397400: &rest.Config{Host:"https://172.20.61.151:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 00:36:45.521738    8176 cert_rotation.go:137] Starting client certificate rotation controller
	I0308 00:36:45.522344    8176 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0308 00:36:45.522606    8176 round_trippers.go:463] DELETE https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:45.522606    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:45.522606    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:45.522606    8176 round_trippers.go:473]     Content-Type: application/json
	I0308 00:36:45.522685    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:45.542363    8176 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0308 00:36:45.542501    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:45.542501    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:45.542501    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:45.542597    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:45.542597    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:45.542597    8176 round_trippers.go:580]     Content-Length: 171
	I0308 00:36:45.542597    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:45 GMT
	I0308 00:36:45.542597    8176 round_trippers.go:580]     Audit-Id: 1f27f500-c60c-431d-9201-eb33ffb7c616
	I0308 00:36:45.542709    8176 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-397400-m02","kind":"nodes","uid":"5a02943c-35bf-44c3-b1e0-997df2d7f70d"}}
	I0308 00:36:45.542807    8176 node.go:173] successfully deleted node "multinode-397400-m02"
	I0308 00:36:45.542878    8176 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.20.50.67 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
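(Editor's note: the request logged via round_trippers above is a plain Kubernetes Node delete. The client config at kapi.go:59 indicates client-go, so the equivalent call is a one-liner; in this sketch the kubeconfig path and node name are copied from the log, and the rest is an assumption, not minikube's actual code path.)

// Sketch: delete the stale worker node object before rejoining it, as the
// DELETE /api/v1/nodes/multinode-397400-m02 request above does.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube7\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Remove the old node object; kubeadm join below recreates it with fresh credentials.
	if err := cs.CoreV1().Nodes().Delete(context.TODO(), "multinode-397400-m02", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("node deleted")
}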
	I0308 00:36:45.542936    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0308 00:36:45.543050    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:36:47.364924    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:36:47.375639    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:47.375639    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:36:49.617106    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:36:49.627809    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:36:49.627809    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:36:49.814817    8176 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 53hp1a.b7h9g76eoa0slcf9 --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
	I0308 00:36:49.814900    8176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.2718977s)
	I0308 00:36:49.815007    8176 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.20.50.67 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0308 00:36:49.815091    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 53hp1a.b7h9g76eoa0slcf9 --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-397400-m02"
	I0308 00:36:50.032201    8176 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 00:36:52.324838    8176 command_runner.go:130] > [preflight] Running pre-flight checks
	I0308 00:36:52.324838    8176 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0308 00:36:52.324838    8176 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0308 00:36:52.327438    8176 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 00:36:52.327438    8176 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 00:36:52.327438    8176 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0308 00:36:52.327438    8176 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0308 00:36:52.327438    8176 command_runner.go:130] > This node has joined the cluster:
	I0308 00:36:52.327438    8176 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0308 00:36:52.327438    8176 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0308 00:36:52.327438    8176 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0308 00:36:52.327568    8176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 53hp1a.b7h9g76eoa0slcf9 --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-397400-m02": (2.5124531s)
	I0308 00:36:52.327641    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0308 00:36:52.523779    8176 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0308 00:36:52.728688    8176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-397400-m02 minikube.k8s.io/updated_at=2024_03_08T00_36_52_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=multinode-397400 minikube.k8s.io/primary=false
	I0308 00:36:52.865519    8176 command_runner.go:130] > node/multinode-397400-m02 labeled
	I0308 00:36:52.865619    8176 start.go:318] duration metric: took 21.9249897s to joinCluster
	I0308 00:36:52.865619    8176 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.20.50.67 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0308 00:36:52.870529    8176 out.go:177] * Verifying Kubernetes components...
	I0308 00:36:52.866337    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:36:52.883641    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:36:53.101431    8176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:36:53.136287    8176 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:36:53.136946    8176 kapi.go:59] client config for multinode-397400: &rest.Config{Host:"https://172.20.61.151:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 00:36:53.137886    8176 node_ready.go:35] waiting up to 6m0s for node "multinode-397400-m02" to be "Ready" ...
	I0308 00:36:53.138090    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:53.138136    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:53.138136    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:53.138181    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:53.142716    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:36:53.142716    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:53.142716    8176 round_trippers.go:580]     Audit-Id: d642fff0-235e-4548-8168-848b99b36317
	I0308 00:36:53.142716    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:53.142716    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:53.142716    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:53.142716    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:53.142716    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:53 GMT
	I0308 00:36:53.142716    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1900","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3687 chars]
	I0308 00:36:53.641753    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:53.641753    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:53.641753    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:53.641753    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:53.642484    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:53.646014    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:53.646014    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:53.646014    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:53.646014    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:53.646014    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:53.646014    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:53 GMT
	I0308 00:36:53.646014    8176 round_trippers.go:580]     Audit-Id: d698ab4e-8732-4cee-9c6e-de68792c624e
	I0308 00:36:53.646014    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1900","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3687 chars]
	I0308 00:36:54.160444    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:54.160444    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:54.160444    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:54.160444    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:54.160977    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:54.164571    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:54.164571    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:54.164571    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:54.164571    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:54.164571    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:54.164571    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:54 GMT
	I0308 00:36:54.164571    8176 round_trippers.go:580]     Audit-Id: d3529c63-ac76-439a-9500-192a0eabc119
	I0308 00:36:54.164813    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1900","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3687 chars]
	I0308 00:36:54.644205    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:54.644294    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:54.644294    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:54.644294    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:54.644731    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:54.649050    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:54.649050    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:54.649050    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:54.649050    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:54.649050    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:54.649050    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:54 GMT
	I0308 00:36:54.649050    8176 round_trippers.go:580]     Audit-Id: 1ecf8ec5-60ec-4b8b-a696-27110ae0640e
	I0308 00:36:54.649050    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1913","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3796 chars]
	I0308 00:36:55.153014    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:55.153014    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:55.153118    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:55.153118    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:55.153377    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:55.153377    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:55.153377    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:55.153377    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:55.153377    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:55 GMT
	I0308 00:36:55.153377    8176 round_trippers.go:580]     Audit-Id: 7df95c67-cbb5-4d1a-88d6-d60acd8c4306
	I0308 00:36:55.153377    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:55.153377    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:55.157394    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1913","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3796 chars]
	I0308 00:36:55.157737    8176 node_ready.go:53] node "multinode-397400-m02" has status "Ready":"False"
	I0308 00:36:55.640996    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:55.641067    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:55.641067    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:55.641067    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:55.641323    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:55.641323    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:55.644641    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:55.644641    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:55.644641    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:55.644641    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:55 GMT
	I0308 00:36:55.644641    8176 round_trippers.go:580]     Audit-Id: f4a9e549-1ae4-406f-88e3-0fe28040b580
	I0308 00:36:55.644641    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:55.644867    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1913","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3796 chars]
	I0308 00:36:56.140737    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:56.140829    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:56.140829    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:56.140829    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:56.142258    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:36:56.143929    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:56.143929    8176 round_trippers.go:580]     Audit-Id: ea139d21-04e2-4ac9-85ed-022dfc5b53de
	I0308 00:36:56.143929    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:56.143929    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:56.144002    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:56.144002    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:56.144002    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:56 GMT
	I0308 00:36:56.144002    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1913","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3796 chars]
	I0308 00:36:56.649461    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:56.649670    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:56.649670    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:56.649670    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:56.650010    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:56.650010    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:56.650010    8176 round_trippers.go:580]     Audit-Id: e16a2e11-2624-4003-ba02-375cfac37da1
	I0308 00:36:56.650010    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:56.652903    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:56.652903    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:56.652903    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:56.652903    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:56 GMT
	I0308 00:36:56.653029    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1913","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3796 chars]
	I0308 00:36:57.153713    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:57.153713    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:57.153713    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:57.153713    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:57.154258    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:57.154258    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:57.157052    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:57.157052    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:57.157052    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:57 GMT
	I0308 00:36:57.157052    8176 round_trippers.go:580]     Audit-Id: 6ffdc11e-7ed4-45db-a0b0-fd3f444d9b0a
	I0308 00:36:57.157052    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:57.157052    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:57.157246    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1913","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3796 chars]
	I0308 00:36:57.642752    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:57.642752    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:57.642828    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:57.642828    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:57.643090    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:57.643090    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:57.643090    8176 round_trippers.go:580]     Audit-Id: d9e68acf-40ed-4eb9-8861-38bab5a7d765
	I0308 00:36:57.643090    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:57.643090    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:57.643090    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:57.643090    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:57.643090    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:57 GMT
	I0308 00:36:57.646096    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1913","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3796 chars]
	I0308 00:36:57.646183    8176 node_ready.go:53] node "multinode-397400-m02" has status "Ready":"False"
	I0308 00:36:58.139920    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:58.139920    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.139920    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.139920    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.140722    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.143368    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.143368    8176 round_trippers.go:580]     Audit-Id: a87d373c-50a9-43ad-abb5-8bd8710616cd
	I0308 00:36:58.143368    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.143368    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.143368    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.143368    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.143368    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.143586    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1925","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0308 00:36:58.143939    8176 node_ready.go:49] node "multinode-397400-m02" has status "Ready":"True"
	I0308 00:36:58.143939    8176 node_ready.go:38] duration metric: took 5.0059577s for node "multinode-397400-m02" to be "Ready" ...
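(Editor's note: the node_ready.go loop that just completed repeatedly GETs the node and checks its Ready condition, roughly every half second, within the 6m0s budget reported above. The sketch below approximates that pattern with client-go; the node name and timeout come from this log, while the helper name, package, and poll interval are assumptions.)

// Sketch: wait for a node's Ready condition by polling, approximating the
// node_ready.go wait loop shown above.
package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func WaitNodeReady(cs kubernetes.Interface, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat lookup errors as "not ready yet" and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}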
	I0308 00:36:58.143939    8176 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:36:58.143939    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:36:58.143939    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.143939    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.143939    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.144753    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.144753    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.144753    8176 round_trippers.go:580]     Audit-Id: dff7b66e-0987-414a-ba9a-20dea66dbeb2
	I0308 00:36:58.149137    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.149137    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.149137    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.149137    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.149137    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.151102    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1927"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1757","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82549 chars]
	I0308 00:36:58.154982    8176 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.155331    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:36:58.155331    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.155331    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.155331    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.158404    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:36:58.158404    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.158404    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.158404    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.158404    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.158404    8176 round_trippers.go:580]     Audit-Id: a2190457-1023-4cb0-8349-1411f6ebedff
	I0308 00:36:58.158404    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.158404    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.158404    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1757","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0308 00:36:58.159117    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:58.159211    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.159211    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.159211    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.163947    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:36:58.163947    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.163947    8176 round_trippers.go:580]     Audit-Id: 45cee827-f036-47d8-b7c9-0e0a3a5ed34d
	I0308 00:36:58.163947    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.163947    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.163947    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.163947    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.163947    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.163947    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:36:58.164651    8176 pod_ready.go:92] pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace has status "Ready":"True"
	I0308 00:36:58.164651    8176 pod_ready.go:81] duration metric: took 9.6687ms for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.164651    8176 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.164651    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:36:58.164651    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.164651    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.164651    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.167280    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:36:58.167280    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.167280    8176 round_trippers.go:580]     Audit-Id: f7242511-c85c-4441-b950-792f16811bc0
	I0308 00:36:58.167280    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.167280    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.167280    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.167280    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.167280    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.168812    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1768","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0308 00:36:58.168903    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:58.168903    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.168903    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.168903    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.171892    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:36:58.171998    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.171998    8176 round_trippers.go:580]     Audit-Id: a00f9dd1-98f0-482b-8890-9051cde55f76
	I0308 00:36:58.171998    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.171998    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.171998    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.172041    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.172041    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.172065    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:36:58.172719    8176 pod_ready.go:92] pod "etcd-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:36:58.172719    8176 pod_ready.go:81] duration metric: took 8.0676ms for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.172787    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.172898    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-397400
	I0308 00:36:58.172942    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.172942    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.172989    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.173720    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.173720    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.173720    8176 round_trippers.go:580]     Audit-Id: f86ef96a-2ce2-4795-8528-571963e40341
	I0308 00:36:58.173720    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.173720    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.173720    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.175804    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.175804    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.175882    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-397400","namespace":"kube-system","uid":"1e615aff-4d66-4ded-b27a-16bc990c80a6","resourceVersion":"1767","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.61.151:8443","kubernetes.io/config.hash":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.mirror":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143837944Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0308 00:36:58.176468    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:58.176468    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.176468    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.176468    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.177162    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.179084    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.179084    8176 round_trippers.go:580]     Audit-Id: 95f11dc7-68cf-4f37-ac55-f282c216ff10
	I0308 00:36:58.179084    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.179084    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.179165    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.179165    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.179165    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.179561    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:36:58.179854    8176 pod_ready.go:92] pod "kube-apiserver-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:36:58.179854    8176 pod_ready.go:81] duration metric: took 7.0668ms for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.179854    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.179854    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-397400
	I0308 00:36:58.179854    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.179854    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.179854    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.180671    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.180671    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.180671    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.180671    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.180671    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.180671    8176 round_trippers.go:580]     Audit-Id: 9b72e4f6-391c-4b60-8577-225f365d58d5
	I0308 00:36:58.180671    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.180671    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.183280    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-397400","namespace":"kube-system","uid":"33cdb29c-e857-4fc2-b950-4fdde032852f","resourceVersion":"1769","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.mirror":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.seen":"2024-03-08T00:13:39.441057580Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0308 00:36:58.183867    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:58.183867    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.183941    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.183941    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.185543    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:36:58.186956    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.186956    8176 round_trippers.go:580]     Audit-Id: b4fe4f9a-cfae-4a0a-9628-409d993ea51b
	I0308 00:36:58.186956    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.186956    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.186956    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.186956    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.186956    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.187202    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:36:58.187232    8176 pod_ready.go:92] pod "kube-controller-manager-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:36:58.187232    8176 pod_ready.go:81] duration metric: took 7.3778ms for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.187232    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.348406    8176 request.go:629] Waited for 161.1728ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:36:58.348670    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:36:58.348752    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.348752    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.348752    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.348949    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.348949    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.348949    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.348949    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.348949    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.354829    8176 round_trippers.go:580]     Audit-Id: 7089cce2-746e-4cfc-ae4d-e001ed2b7c0f
	I0308 00:36:58.354829    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.354829    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.355473    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gw9w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b5de9a2-0643-466e-9a31-4349596c0417","resourceVersion":"1907","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5538 chars]
	I0308 00:36:58.543785    8176 request.go:629] Waited for 188.1061ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:58.543958    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:36:58.543958    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.543958    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.544032    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.544794    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.544794    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.544794    8176 round_trippers.go:580]     Audit-Id: ecb7c41b-abd3-4524-b4df-5a308fbec085
	I0308 00:36:58.544794    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.544794    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.544794    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.544794    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.544794    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.547837    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1925","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0308 00:36:58.548314    8176 pod_ready.go:92] pod "kube-proxy-gw9w9" in "kube-system" namespace has status "Ready":"True"
	I0308 00:36:58.548314    8176 pod_ready.go:81] duration metric: took 361.079ms for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
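
The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's own token-bucket rate limiter, not from the API server's Priority and Fairness. A minimal sketch, assuming a standard client-go setup, of where that limit is configured (the kubeconfig path and the QPS/Burst values are illustrative, not minikube's actual settings):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig as usual (path is illustrative).
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	// client-go throttles requests on the client side; when a request has to
	// wait noticeably long for a token, it logs the "Waited for ... due to
	// client-side throttling" message seen above. Raising QPS/Burst reduces
	// that waiting (the values below are examples only).
	config.QPS = 50
	config.Burst = 100

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("pods in kube-system:", len(pods.Items))
}
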
	I0308 00:36:58.548436    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:58.759209    8176 request.go:629] Waited for 210.5405ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:36:58.759209    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:36:58.759436    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.759436    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.759436    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.760171    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.760171    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.760171    8176 round_trippers.go:580]     Audit-Id: 1bb11b25-337a-447b-a337-324b4d0777ee
	I0308 00:36:58.760171    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.760171    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.763105    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.763105    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.763105    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.763225    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ktnrd","generateName":"kube-proxy-","namespace":"kube-system","uid":"e76aaee4-f97d-4d55-b458-893eef62fb22","resourceVersion":"1626","creationTimestamp":"2024-03-08T00:20:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:20:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5767 chars]
	I0308 00:36:58.947384    8176 request.go:629] Waited for 183.2452ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:36:58.947518    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:36:58.947518    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:58.947518    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:58.947518    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:58.947851    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:58.951127    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:58.951127    8176 round_trippers.go:580]     Audit-Id: fd602a3e-e0bc-47d1-b17e-04dbc5ee4e60
	I0308 00:36:58.951127    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:58.951127    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:58.951127    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:58.951127    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:58.951127    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:58 GMT
	I0308 00:36:58.951554    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"4a97100d-ade6-4031-b2fe-9e9ba736320e","resourceVersion":"1765","creationTimestamp":"2024-03-08T00:30:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_30_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:30:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0308 00:36:58.952041    8176 pod_ready.go:97] node "multinode-397400-m03" hosting pod "kube-proxy-ktnrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400-m03" has status "Ready":"Unknown"
	I0308 00:36:58.952041    8176 pod_ready.go:81] duration metric: took 403.6011ms for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	E0308 00:36:58.952041    8176 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-397400-m03" hosting pod "kube-proxy-ktnrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-397400-m03" has status "Ready":"Unknown"
	I0308 00:36:58.952154    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:59.140523    8176 request.go:629] Waited for 188.1889ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:36:59.140523    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:36:59.140523    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:59.140523    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:59.140523    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:59.144435    8176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:36:59.146666    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:59.146666    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:59.146666    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:59 GMT
	I0308 00:36:59.146666    8176 round_trippers.go:580]     Audit-Id: 926aaf78-6241-4be7-bcb1-cdc8bd53047d
	I0308 00:36:59.146666    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:59.146666    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:59.146666    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:59.146900    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nt8td","generateName":"kube-proxy-","namespace":"kube-system","uid":"dafb9385-fe20-4849-bd58-31dcf82b4a58","resourceVersion":"1674","creationTimestamp":"2024-03-08T00:13:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0308 00:36:59.342793    8176 request.go:629] Waited for 195.8912ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:59.343001    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:59.343117    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:59.343117    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:59.343117    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:59.349069    8176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:36:59.349069    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:59.349069    8176 round_trippers.go:580]     Audit-Id: d1b5ca68-eee3-4943-b4a6-263ee0ab1af6
	I0308 00:36:59.349069    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:59.349069    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:59.349069    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:59.349069    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:59.349069    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:59 GMT
	I0308 00:36:59.349069    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:36:59.349924    8176 pod_ready.go:92] pod "kube-proxy-nt8td" in "kube-system" namespace has status "Ready":"True"
	I0308 00:36:59.349924    8176 pod_ready.go:81] duration metric: took 397.766ms for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:59.349924    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:59.549671    8176 request.go:629] Waited for 199.3324ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:36:59.549702    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:36:59.549702    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:59.549702    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:59.549702    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:59.550871    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:36:59.550871    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:59.550871    8176 round_trippers.go:580]     Audit-Id: 468f48cd-1b06-4aa5-8fcf-d94054278419
	I0308 00:36:59.550871    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:59.550871    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:59.550871    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:59.550871    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:59.550871    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:59 GMT
	I0308 00:36:59.554221    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1744","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0308 00:36:59.745236    8176 request.go:629] Waited for 190.2545ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:59.745355    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:36:59.745355    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:59.745506    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:59.745506    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:59.745792    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:59.749049    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:59.749049    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:59.749132    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:59.749132    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:59.749132    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:59 GMT
	I0308 00:36:59.749132    8176 round_trippers.go:580]     Audit-Id: 9984b193-d770-4951-ae38-45e827f98258
	I0308 00:36:59.749132    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:59.749132    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:36:59.749813    8176 pod_ready.go:92] pod "kube-scheduler-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:36:59.749813    8176 pod_ready.go:81] duration metric: took 399.8859ms for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:36:59.749813    8176 pod_ready.go:38] duration metric: took 1.6058586s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
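
Each wait above follows the same pattern: fetch the pod, fetch its hosting node, and treat the pod as "Ready" only if the PodReady condition is True, skipping pods whose node is not Ready (as happened with kube-proxy-ktnrd). A simplified client-go-style sketch of those two condition checks (illustrative helpers, not minikube's pod_ready.go):

package readiness

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's PodReady condition is True, the same
// check the log records as: has status "Ready":"True".
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

// isNodeReady reports whether the node's Ready condition is True; pods on a
// node whose Ready status is Unknown are skipped rather than waited on.
func isNodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
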
	I0308 00:36:59.749813    8176 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 00:36:59.762653    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 00:36:59.788133    8176 system_svc.go:56] duration metric: took 38.3199ms WaitForService to wait for kubelet
	I0308 00:36:59.788240    8176 kubeadm.go:576] duration metric: took 6.9224492s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 00:36:59.788240    8176 node_conditions.go:102] verifying NodePressure condition ...
	I0308 00:36:59.948084    8176 request.go:629] Waited for 159.4711ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes
	I0308 00:36:59.948289    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes
	I0308 00:36:59.948289    8176 round_trippers.go:469] Request Headers:
	I0308 00:36:59.948289    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:36:59.948289    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:36:59.948605    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:36:59.952470    8176 round_trippers.go:577] Response Headers:
	I0308 00:36:59.952470    8176 round_trippers.go:580]     Audit-Id: 688aff5e-1497-40cd-8be8-f5bbd8e3cef7
	I0308 00:36:59.952470    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:36:59.952470    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:36:59.952470    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:36:59.952470    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:36:59.952470    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:36:59 GMT
	I0308 00:36:59.953236    8176 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1930"},"items":[{"metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15485 chars]
	I0308 00:36:59.953784    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:36:59.953784    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:36:59.954332    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:36:59.954332    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:36:59.954332    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:36:59.954332    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:36:59.954332    8176 node_conditions.go:105] duration metric: took 166.09ms to run NodePressure ...
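
The NodePressure step above lists all nodes once and reads each node's ephemeral-storage and cpu capacity. A minimal sketch of pulling those same two fields from a NodeList with the k8s.io/api types (printNodeCapacities is a hypothetical helper, not the node_conditions.go code):

package nodeinfo

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// printNodeCapacities prints the same two capacity fields the log reports
// for every node in a NodeList.
func printNodeCapacities(nodes *corev1.NodeList) {
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
	}
}
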
	I0308 00:36:59.954332    8176 start.go:240] waiting for startup goroutines ...
	I0308 00:36:59.954332    8176 start.go:254] writing updated cluster config ...
	I0308 00:36:59.958246    8176 out.go:177] 
	I0308 00:36:59.961232    8176 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:36:59.967583    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:36:59.967583    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:36:59.975242    8176 out.go:177] * Starting "multinode-397400-m03" worker node in "multinode-397400" cluster
	I0308 00:36:59.975577    8176 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 00:36:59.975577    8176 cache.go:56] Caching tarball of preloaded images
	I0308 00:36:59.978286    8176 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0308 00:36:59.978558    8176 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0308 00:36:59.978651    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:36:59.986898    8176 start.go:360] acquireMachinesLock for multinode-397400-m03: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 00:36:59.986898    8176 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-397400-m03"
	I0308 00:36:59.987517    8176 start.go:96] Skipping create...Using existing machine configuration
	I0308 00:36:59.987517    8176 fix.go:54] fixHost starting: m03
	I0308 00:36:59.987517    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:01.815288    8176 main.go:141] libmachine: [stdout =====>] : Off
	
	I0308 00:37:01.815288    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:01.825661    8176 fix.go:112] recreateIfNeeded on multinode-397400-m03: state=Stopped err=<nil>
	W0308 00:37:01.825661    8176 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 00:37:01.829418    8176 out.go:177] * Restarting existing hyperv VM for "multinode-397400-m03" ...
	I0308 00:37:01.832003    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-397400-m03
	I0308 00:37:04.617499    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:37:04.617499    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:04.617499    8176 main.go:141] libmachine: Waiting for host to start...
	I0308 00:37:04.627069    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:06.697701    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:06.708796    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:06.708863    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:08.913028    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:37:08.917118    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:09.920924    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:11.946471    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:11.946686    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:11.946686    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:14.212777    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:37:14.222616    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:15.232693    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:17.191145    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:17.191145    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:17.193453    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:19.421215    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:37:19.421215    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:20.430191    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:22.405409    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:22.405409    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:22.405871    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:24.705076    8176 main.go:141] libmachine: [stdout =====>] : 
	I0308 00:37:24.705076    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:25.719882    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:27.754910    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:27.754910    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:27.764685    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:30.033908    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:30.046843    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:30.050287    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:31.915340    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:31.928755    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:31.928818    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:34.126639    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:34.126639    8176 main.go:141] libmachine: [stderr =====>] : 
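
The "Waiting for host to start..." phase above keeps invoking two PowerShell expressions until the VM reports Running and its first network adapter returns a non-empty IP address. A rough Go sketch of that polling loop under the same Hyper-V cmdlets (psOutput and waitForVMIP are hypothetical helpers; the one-second interval and the timeout are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psOutput runs a single PowerShell expression and returns its trimmed stdout.
func psOutput(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForVMIP polls the VM state and first IP address, mirroring the
// Get-VM calls in the log above.
func waitForVMIP(vmName string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := psOutput(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vmName))
		if err != nil {
			return "", err
		}
		if state == "Running" {
			ip, err := psOutput(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName))
			if err != nil {
				return "", err
			}
			if ip != "" {
				return ip, nil
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vmName)
}

func main() {
	ip, err := waitForVMIP("multinode-397400-m03", 5*time.Minute)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("VM IP:", ip)
}
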
	I0308 00:37:34.137265    8176 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400\config.json ...
	I0308 00:37:34.139967    8176 machine.go:94] provisionDockerMachine start ...
	I0308 00:37:34.140094    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:36.010262    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:36.010262    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:36.020390    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:38.248762    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:38.248762    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:38.265133    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:37:38.265257    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:37:38.265257    8176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 00:37:38.392134    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 00:37:38.392226    8176 buildroot.go:166] provisioning hostname "multinode-397400-m03"
	I0308 00:37:38.392294    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:40.252188    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:40.262557    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:40.262557    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:42.496093    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:42.505054    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:42.511578    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:37:42.511713    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:37:42.511713    8176 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-397400-m03 && echo "multinode-397400-m03" | sudo tee /etc/hostname
	I0308 00:37:42.655016    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-397400-m03
	
	I0308 00:37:42.655112    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:44.522061    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:44.537500    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:44.537619    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:46.812705    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:46.812705    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:46.817520    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:37:46.818515    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:37:46.818515    8176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-397400-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-397400-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-397400-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 00:37:46.958186    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
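
The hostname and /etc/hosts commands above are executed over SSH against the VM using the per-machine id_rsa key. A minimal sketch of running one such provisioning command with golang.org/x/crypto/ssh (the key path and address are illustrative, and this is not minikube's sshutil implementation):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path is illustrative; minikube stores one id_rsa per machine.
	keyBytes, err := os.ReadFile(`C:\path\to\machines\multinode-397400-m03\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.20.53.127:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same style of command as the hostname provisioning step above.
	out, err := session.CombinedOutput(`sudo hostname multinode-397400-m03 && echo "multinode-397400-m03" | sudo tee /etc/hostname`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("SSH cmd output: %s\n", out)
}
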
	I0308 00:37:46.958186    8176 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 00:37:46.958186    8176 buildroot.go:174] setting up certificates
	I0308 00:37:46.958186    8176 provision.go:84] configureAuth start
	I0308 00:37:46.958186    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:48.845567    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:48.845567    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:48.845761    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:51.134648    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:51.134866    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:51.134977    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:53.012572    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:53.012572    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:53.012656    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:55.309630    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:55.309694    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:55.309694    8176 provision.go:143] copyHostCerts
	I0308 00:37:55.309694    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0308 00:37:55.309694    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 00:37:55.309694    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 00:37:55.310460    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 00:37:55.311284    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0308 00:37:55.311825    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 00:37:55.311941    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 00:37:55.312000    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 00:37:55.313152    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0308 00:37:55.313393    8176 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 00:37:55.313449    8176 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 00:37:55.313449    8176 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 00:37:55.314312    8176 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-397400-m03 san=[127.0.0.1 172.20.53.127 localhost minikube multinode-397400-m03]
	I0308 00:37:55.739436    8176 provision.go:177] copyRemoteCerts
	I0308 00:37:55.756516    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 00:37:55.756642    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:37:57.641399    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:37:57.641458    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:57.641458    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:37:59.913352    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:37:59.913352    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:37:59.913626    8176 sshutil.go:53] new ssh client: &{IP:172.20.53.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m03\id_rsa Username:docker}
	I0308 00:38:00.015351    8176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2586563s)
	I0308 00:38:00.015351    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0308 00:38:00.015351    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 00:38:00.061523    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0308 00:38:00.061728    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0308 00:38:00.096723    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0308 00:38:00.102065    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 00:38:00.145010    8176 provision.go:87] duration metric: took 13.1866987s to configureAuth
	I0308 00:38:00.145010    8176 buildroot.go:189] setting minikube options for container-runtime
	I0308 00:38:00.145710    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:38:00.145854    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:02.046123    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:02.052762    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:02.052762    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:04.297886    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:04.307504    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:04.313436    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:38:04.313955    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:38:04.313955    8176 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 00:38:04.436007    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 00:38:04.436007    8176 buildroot.go:70] root file system type: tmpfs
	I0308 00:38:04.436007    8176 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 00:38:04.436539    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:06.348727    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:06.349003    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:06.349003    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:08.614664    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:08.614713    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:08.619662    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:38:08.619662    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:38:08.620181    8176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.61.151"
	Environment="NO_PROXY=172.20.61.151,172.20.50.67"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 00:38:08.762595    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.61.151
	Environment=NO_PROXY=172.20.61.151,172.20.50.67
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 00:38:08.762686    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:10.617689    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:10.623394    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:10.623482    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:12.872267    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:12.872267    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:12.883177    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:38:12.883982    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:38:12.884010    8176 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 00:38:14.047323    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0308 00:38:14.047323    8176 machine.go:97] duration metric: took 39.9069196s to provisionDockerMachine
	I0308 00:38:14.047323    8176 start.go:293] postStartSetup for "multinode-397400-m03" (driver="hyperv")
	I0308 00:38:14.047323    8176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 00:38:14.062410    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 00:38:14.062410    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:15.925138    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:15.925138    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:15.925213    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:18.162111    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:18.171636    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:18.172065    8176 sshutil.go:53] new ssh client: &{IP:172.20.53.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m03\id_rsa Username:docker}
	I0308 00:38:18.273305    8176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.210855s)
	I0308 00:38:18.292569    8176 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 00:38:18.299161    8176 command_runner.go:130] > NAME=Buildroot
	I0308 00:38:18.299161    8176 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0308 00:38:18.299161    8176 command_runner.go:130] > ID=buildroot
	I0308 00:38:18.299161    8176 command_runner.go:130] > VERSION_ID=2023.02.9
	I0308 00:38:18.299161    8176 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0308 00:38:18.299161    8176 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 00:38:18.299276    8176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 00:38:18.299438    8176 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 00:38:18.300536    8176 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 00:38:18.300618    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /etc/ssl/certs/83242.pem
	I0308 00:38:18.309347    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 00:38:18.320905    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 00:38:18.368301    8176 start.go:296] duration metric: took 4.3209373s for postStartSetup
	I0308 00:38:18.368301    8176 fix.go:56] duration metric: took 1m18.3800388s for fixHost
	I0308 00:38:18.368301    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:20.201719    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:20.201719    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:20.201719    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:22.460042    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:22.463260    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:22.468142    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:38:22.468842    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:38:22.468842    8176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 00:38:22.594393    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709858302.608765026
	
	I0308 00:38:22.594393    8176 fix.go:216] guest clock: 1709858302.608765026
	I0308 00:38:22.594393    8176 fix.go:229] Guest: 2024-03-08 00:38:22.608765026 +0000 UTC Remote: 2024-03-08 00:38:18.3683013 +0000 UTC m=+340.607715401 (delta=4.240463726s)
	I0308 00:38:22.594393    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:24.448100    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:24.448100    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:24.448188    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:26.682698    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:26.682698    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:26.697628    8176 main.go:141] libmachine: Using SSH client type: native
	I0308 00:38:26.698495    8176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.127 22 <nil> <nil>}
	I0308 00:38:26.698495    8176 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709858302
	I0308 00:38:26.834776    8176 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 00:38:22 UTC 2024
	
	I0308 00:38:26.834776    8176 fix.go:236] clock set: Fri Mar  8 00:38:22 UTC 2024
	 (err=<nil>)
	I0308 00:38:26.834776    8176 start.go:83] releasing machines lock for "multinode-397400-m03", held for 1m26.8470524s
	I0308 00:38:26.835321    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:28.706471    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:28.706471    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:28.716677    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:30.937753    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:30.939815    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:30.943138    8176 out.go:177] * Found network options:
	I0308 00:38:30.947229    8176 out.go:177]   - NO_PROXY=172.20.61.151,172.20.50.67
	W0308 00:38:30.949070    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 00:38:30.950090    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 00:38:30.952229    8176 out.go:177]   - NO_PROXY=172.20.61.151,172.20.50.67
	W0308 00:38:30.955254    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 00:38:30.955254    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 00:38:30.955653    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	W0308 00:38:30.955653    8176 proxy.go:119] fail to check proxy env: Error ip not in block
	I0308 00:38:30.956850    8176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 00:38:30.956850    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:30.960821    8176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0308 00:38:30.960821    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:32.931812    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:32.931812    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:32.931812    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:32.931812    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:32.942554    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:32.942792    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:35.301201    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:35.307244    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:35.307244    8176 sshutil.go:53] new ssh client: &{IP:172.20.53.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m03\id_rsa Username:docker}
	I0308 00:38:35.319893    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:35.325168    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:35.325461    8176 sshutil.go:53] new ssh client: &{IP:172.20.53.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m03\id_rsa Username:docker}
	I0308 00:38:35.509221    8176 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0308 00:38:35.510086    8176 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5531934s)
	I0308 00:38:35.510160    8176 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0308 00:38:35.510160    8176 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5492966s)
	W0308 00:38:35.510160    8176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 00:38:35.522567    8176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 00:38:35.541070    8176 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0308 00:38:35.546072    8176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 00:38:35.546105    8176 start.go:494] detecting cgroup driver to use...
	I0308 00:38:35.546268    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:38:35.574424    8176 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0308 00:38:35.587019    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 00:38:35.616742    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 00:38:35.634361    8176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 00:38:35.644270    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 00:38:35.683700    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:38:35.712918    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 00:38:35.741859    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 00:38:35.769916    8176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 00:38:35.804682    8176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 00:38:35.833964    8176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 00:38:35.836875    8176 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0308 00:38:35.861153    8176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 00:38:35.894963    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:38:36.087567    8176 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0308 00:38:36.116454    8176 start.go:494] detecting cgroup driver to use...
	I0308 00:38:36.130495    8176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 00:38:36.151821    8176 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0308 00:38:36.151821    8176 command_runner.go:130] > [Unit]
	I0308 00:38:36.151821    8176 command_runner.go:130] > Description=Docker Application Container Engine
	I0308 00:38:36.151821    8176 command_runner.go:130] > Documentation=https://docs.docker.com
	I0308 00:38:36.151821    8176 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0308 00:38:36.151821    8176 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0308 00:38:36.151821    8176 command_runner.go:130] > StartLimitBurst=3
	I0308 00:38:36.151821    8176 command_runner.go:130] > StartLimitIntervalSec=60
	I0308 00:38:36.151821    8176 command_runner.go:130] > [Service]
	I0308 00:38:36.151821    8176 command_runner.go:130] > Type=notify
	I0308 00:38:36.151821    8176 command_runner.go:130] > Restart=on-failure
	I0308 00:38:36.151821    8176 command_runner.go:130] > Environment=NO_PROXY=172.20.61.151
	I0308 00:38:36.151821    8176 command_runner.go:130] > Environment=NO_PROXY=172.20.61.151,172.20.50.67
	I0308 00:38:36.151821    8176 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0308 00:38:36.151821    8176 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0308 00:38:36.151821    8176 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0308 00:38:36.151821    8176 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0308 00:38:36.151821    8176 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0308 00:38:36.151821    8176 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0308 00:38:36.151821    8176 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0308 00:38:36.151821    8176 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0308 00:38:36.151821    8176 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0308 00:38:36.151821    8176 command_runner.go:130] > ExecStart=
	I0308 00:38:36.151821    8176 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0308 00:38:36.151821    8176 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0308 00:38:36.151821    8176 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0308 00:38:36.151821    8176 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0308 00:38:36.151821    8176 command_runner.go:130] > LimitNOFILE=infinity
	I0308 00:38:36.151821    8176 command_runner.go:130] > LimitNPROC=infinity
	I0308 00:38:36.151821    8176 command_runner.go:130] > LimitCORE=infinity
	I0308 00:38:36.151821    8176 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0308 00:38:36.151821    8176 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0308 00:38:36.151821    8176 command_runner.go:130] > TasksMax=infinity
	I0308 00:38:36.151821    8176 command_runner.go:130] > TimeoutStartSec=0
	I0308 00:38:36.151821    8176 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0308 00:38:36.151821    8176 command_runner.go:130] > Delegate=yes
	I0308 00:38:36.151821    8176 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0308 00:38:36.151821    8176 command_runner.go:130] > KillMode=process
	I0308 00:38:36.151821    8176 command_runner.go:130] > [Install]
	I0308 00:38:36.151821    8176 command_runner.go:130] > WantedBy=multi-user.target
	I0308 00:38:36.162943    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:38:36.195879    8176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 00:38:36.226987    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 00:38:36.260137    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:38:36.290752    8176 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0308 00:38:36.363249    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 00:38:36.383692    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 00:38:36.412595    8176 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0308 00:38:36.423699    8176 ssh_runner.go:195] Run: which cri-dockerd
	I0308 00:38:36.429893    8176 command_runner.go:130] > /usr/bin/cri-dockerd
	I0308 00:38:36.439866    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 00:38:36.457624    8176 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 00:38:36.493586    8176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 00:38:36.649200    8176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 00:38:36.795818    8176 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 00:38:36.795905    8176 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0308 00:38:36.834893    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:38:36.999557    8176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 00:38:38.578421    8176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5788498s)
	I0308 00:38:38.588933    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 00:38:38.619856    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:38:38.650443    8176 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 00:38:38.819227    8176 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 00:38:38.979514    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:38:39.160602    8176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 00:38:39.210867    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 00:38:39.244985    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:38:39.421568    8176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 00:38:39.507838    8176 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 00:38:39.520401    8176 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 00:38:39.530379    8176 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0308 00:38:39.530379    8176 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0308 00:38:39.530379    8176 command_runner.go:130] > Device: 0,22	Inode: 862         Links: 1
	I0308 00:38:39.530379    8176 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0308 00:38:39.530379    8176 command_runner.go:130] > Access: 2024-03-08 00:38:39.456636032 +0000
	I0308 00:38:39.531908    8176 command_runner.go:130] > Modify: 2024-03-08 00:38:39.456636032 +0000
	I0308 00:38:39.531908    8176 command_runner.go:130] > Change: 2024-03-08 00:38:39.459636053 +0000
	I0308 00:38:39.531908    8176 command_runner.go:130] >  Birth: -
	I0308 00:38:39.531953    8176 start.go:562] Will wait 60s for crictl version
	I0308 00:38:39.541992    8176 ssh_runner.go:195] Run: which crictl
	I0308 00:38:39.548555    8176 command_runner.go:130] > /usr/bin/crictl
	I0308 00:38:39.558585    8176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 00:38:39.624546    8176 command_runner.go:130] > Version:  0.1.0
	I0308 00:38:39.626261    8176 command_runner.go:130] > RuntimeName:  docker
	I0308 00:38:39.626261    8176 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0308 00:38:39.626261    8176 command_runner.go:130] > RuntimeApiVersion:  v1
	I0308 00:38:39.626329    8176 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 00:38:39.634356    8176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:38:39.661502    8176 command_runner.go:130] > 24.0.7
	I0308 00:38:39.671048    8176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 00:38:39.699490    8176 command_runner.go:130] > 24.0.7
	I0308 00:38:39.703939    8176 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 00:38:39.707502    8176 out.go:177]   - env NO_PROXY=172.20.61.151
	I0308 00:38:39.709851    8176 out.go:177]   - env NO_PROXY=172.20.61.151,172.20.50.67
	I0308 00:38:39.711175    8176 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 00:38:39.715783    8176 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 00:38:39.715783    8176 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 00:38:39.715783    8176 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 00:38:39.715783    8176 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 00:38:39.715783    8176 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 00:38:39.715783    8176 ip.go:210] interface addr: 172.20.48.1/20
	I0308 00:38:39.731062    8176 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 00:38:39.736154    8176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 00:38:39.754341    8176 mustload.go:65] Loading cluster: multinode-397400
	I0308 00:38:39.755024    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:38:39.755331    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:38:41.629166    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:41.629166    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:41.635956    8176 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:38:41.636634    8176 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\multinode-397400 for IP: 172.20.53.127
	I0308 00:38:41.636634    8176 certs.go:194] generating shared ca certs ...
	I0308 00:38:41.636802    8176 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 00:38:41.636849    8176 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 00:38:41.637656    8176 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 00:38:41.637953    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0308 00:38:41.638260    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0308 00:38:41.638483    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0308 00:38:41.638698    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0308 00:38:41.639039    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 00:38:41.639039    8176 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 00:38:41.639615    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 00:38:41.639897    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 00:38:41.639897    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 00:38:41.639897    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 00:38:41.640782    8176 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 00:38:41.640782    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:38:41.640782    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem -> /usr/share/ca-certificates/8324.pem
	I0308 00:38:41.640782    8176 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> /usr/share/ca-certificates/83242.pem
	I0308 00:38:41.641561    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 00:38:41.688344    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 00:38:41.728697    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 00:38:41.768518    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 00:38:41.811304    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 00:38:41.850024    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 00:38:41.889072    8176 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 00:38:41.944033    8176 ssh_runner.go:195] Run: openssl version
	I0308 00:38:41.951478    8176 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0308 00:38:41.961195    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 00:38:41.991868    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 00:38:41.994064    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:38:41.994064    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 00:38:41.999821    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 00:38:42.010750    8176 command_runner.go:130] > 3ec20f2e
	I0308 00:38:42.026007    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 00:38:42.056713    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 00:38:42.085516    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:38:42.093101    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:38:42.093101    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:38:42.104757    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 00:38:42.112321    8176 command_runner.go:130] > b5213941
	I0308 00:38:42.122694    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 00:38:42.151961    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 00:38:42.181972    8176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 00:38:42.184088    8176 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:38:42.184088    8176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 00:38:42.198369    8176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 00:38:42.201513    8176 command_runner.go:130] > 51391683
	I0308 00:38:42.207168    8176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0308 00:38:42.243610    8176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 00:38:42.245901    8176 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 00:38:42.249343    8176 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 00:38:42.249537    8176 kubeadm.go:928] updating node {m03 172.20.53.127 0 v1.28.4  false true} ...
	I0308 00:38:42.249843    8176 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-397400-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.53.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 00:38:42.259227    8176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 00:38:42.280573    8176 command_runner.go:130] > kubeadm
	I0308 00:38:42.280573    8176 command_runner.go:130] > kubectl
	I0308 00:38:42.280573    8176 command_runner.go:130] > kubelet
	I0308 00:38:42.280711    8176 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 00:38:42.291438    8176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0308 00:38:42.309265    8176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0308 00:38:42.335838    8176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 00:38:42.373710    8176 ssh_runner.go:195] Run: grep 172.20.61.151	control-plane.minikube.internal$ /etc/hosts
	I0308 00:38:42.379924    8176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.61.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 00:38:42.408877    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:38:42.577589    8176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:38:42.604547    8176 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:38:42.604861    8176 start.go:316] joinCluster: &{Name:multinode-397400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-397400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.61.151 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.50.67 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.53.127 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 00:38:42.605447    8176 start.go:329] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.20.53.127 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0308 00:38:42.605617    8176 host.go:66] Checking if "multinode-397400-m03" exists ...
	I0308 00:38:42.606270    8176 mustload.go:65] Loading cluster: multinode-397400
	I0308 00:38:42.606763    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:38:42.607503    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:38:44.546577    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:44.546577    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:44.546577    8176 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:38:44.556974    8176 api_server.go:166] Checking apiserver status ...
	I0308 00:38:44.567565    8176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:38:44.567565    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:38:46.466788    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:46.466788    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:46.466788    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:48.706501    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:38:48.715976    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:48.716151    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:38:48.821797    8176 command_runner.go:130] > 1978
	I0308 00:38:48.821797    8176 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.2541911s)
	I0308 00:38:48.839262    8176 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1978/cgroup
	W0308 00:38:48.856353    8176 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1978/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 00:38:48.868331    8176 ssh_runner.go:195] Run: ls
	I0308 00:38:48.874644    8176 api_server.go:253] Checking apiserver healthz at https://172.20.61.151:8443/healthz ...
	I0308 00:38:48.882227    8176 api_server.go:279] https://172.20.61.151:8443/healthz returned 200:
	ok
	I0308 00:38:48.897567    8176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-397400-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0308 00:38:49.030970    8176 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-srl7h, kube-system/kube-proxy-ktnrd
	I0308 00:38:49.042258    8176 command_runner.go:130] > node/multinode-397400-m03 cordoned
	I0308 00:38:49.044411    8176 command_runner.go:130] > node/multinode-397400-m03 drained
	I0308 00:38:49.044553    8176 node.go:125] successfully drained node "multinode-397400-m03"
	I0308 00:38:49.044553    8176 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0308 00:38:49.044553    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:38:50.947531    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:50.947531    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:50.957846    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m03 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:53.179365    8176 main.go:141] libmachine: [stdout =====>] : 172.20.53.127
	
	I0308 00:38:53.179365    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:53.190497    8176 sshutil.go:53] new ssh client: &{IP:172.20.53.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m03\id_rsa Username:docker}
	I0308 00:38:53.630212    8176 command_runner.go:130] ! W0308 00:38:53.645380    1473 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0308 00:38:54.012245    8176 command_runner.go:130] > [preflight] Running pre-flight checks
	I0308 00:38:54.012245    8176 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0308 00:38:54.012245    8176 command_runner.go:130] > [reset] Stopping the kubelet service
	I0308 00:38:54.012245    8176 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0308 00:38:54.012245    8176 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0308 00:38:54.012245    8176 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0308 00:38:54.012245    8176 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0308 00:38:54.012245    8176 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0308 00:38:54.012245    8176 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0308 00:38:54.012245    8176 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0308 00:38:54.012245    8176 command_runner.go:130] > to reset your system's IPVS tables.
	I0308 00:38:54.012245    8176 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0308 00:38:54.012245    8176 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0308 00:38:54.012245    8176 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.967645s)
	I0308 00:38:54.012245    8176 node.go:152] successfully reset node "multinode-397400-m03"
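
As the kubeadm output above points out, reset leaves /etc/cni/net.d plus any iptables/IPVS state behind. A tiny sketch of the manual CNI cleanup it refers to (hypothetical helper, run on the node itself with sufficient privileges):

    package sketch

    import (
        "os"
        "path/filepath"
    )

    // cleanCNIConfig removes leftover CNI network configs after "kubeadm reset",
    // as the reset output above advises.
    func cleanCNIConfig() error {
        const dir = "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            if err := os.RemoveAll(filepath.Join(dir, e.Name())); err != nil {
                return err
            }
        }
        return nil
    }
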
	I0308 00:38:54.013808    8176 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:38:54.014381    8176 kapi.go:59] client config for multinode-397400: &rest.Config{Host:"https://172.20.61.151:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 00:38:54.015400    8176 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0308 00:38:54.015485    8176 round_trippers.go:463] DELETE https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:38:54.015485    8176 round_trippers.go:469] Request Headers:
	I0308 00:38:54.015485    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:38:54.015485    8176 round_trippers.go:473]     Content-Type: application/json
	I0308 00:38:54.015485    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:38:54.026082    8176 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0308 00:38:54.033974    8176 round_trippers.go:577] Response Headers:
	I0308 00:38:54.033974    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:38:54.033974    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:38:54.033974    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:38:54.033974    8176 round_trippers.go:580]     Content-Length: 171
	I0308 00:38:54.033974    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:38:54 GMT
	I0308 00:38:54.033974    8176 round_trippers.go:580]     Audit-Id: 4934f935-a258-48b1-960f-184d3168e43d
	I0308 00:38:54.033974    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:38:54.033974    8176 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-397400-m03","kind":"nodes","uid":"4a97100d-ade6-4031-b2fe-9e9ba736320e"}}
	I0308 00:38:54.033974    8176 node.go:173] successfully deleted node "multinode-397400-m03"
	I0308 00:38:54.033974    8176 start.go:333] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.20.53.127 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
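
The DELETE call logged above removes the stale Node object so the machine can rejoin under the same name. An equivalent client-go sketch (illustrative; assumes cs is a configured clientset):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteNode issues the same DELETE /api/v1/nodes/<name> request shown in the log.
    func deleteNode(ctx context.Context, cs kubernetes.Interface, name string) error {
        return cs.CoreV1().Nodes().Delete(ctx, name, metav1.DeleteOptions{})
    }
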
	I0308 00:38:54.033974    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0308 00:38:54.033974    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:38:55.904898    8176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:38:55.905110    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:55.905173    8176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:38:58.135211    8176 main.go:141] libmachine: [stdout =====>] : 172.20.61.151
	
	I0308 00:38:58.136861    8176 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:38:58.137333    8176 sshutil.go:53] new ssh client: &{IP:172.20.61.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:38:58.314671    8176 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token okpz6a.0qop7h4cmrekc9k9 --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
	I0308 00:38:58.314766    8176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.2807208s)
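
The token-create command above prints a ready-to-run join line containing the control-plane endpoint, a bootstrap token (non-expiring because of --ttl=0), and the discovery CA certificate hash; that line is replayed on the worker below with extra flags. A small, hypothetical parser for that output, shown only to make the fields explicit:

    package sketch

    import "strings"

    // parseJoinCommand extracts the endpoint, token and discovery CA hash from
    // the output of "kubeadm token create --print-join-command".
    func parseJoinCommand(s string) (endpoint, token, caHash string) {
        fields := strings.Fields(s)
        for i, f := range fields {
            if i+1 >= len(fields) {
                break
            }
            switch f {
            case "join":
                endpoint = fields[i+1]
            case "--token":
                token = fields[i+1]
            case "--discovery-token-ca-cert-hash":
                caHash = fields[i+1]
            }
        }
        return
    }
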
	I0308 00:38:58.314766    8176 start.go:342] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.20.53.127 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0308 00:38:58.314766    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token okpz6a.0qop7h4cmrekc9k9 --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-397400-m03"
	I0308 00:38:58.538177    8176 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 00:39:01.315807    8176 command_runner.go:130] > [preflight] Running pre-flight checks
	I0308 00:39:01.315950    8176 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0308 00:39:01.315950    8176 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0308 00:39:01.315950    8176 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 00:39:01.315950    8176 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 00:39:01.315950    8176 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0308 00:39:01.315950    8176 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0308 00:39:01.316055    8176 command_runner.go:130] > This node has joined the cluster:
	I0308 00:39:01.316055    8176 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0308 00:39:01.316055    8176 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0308 00:39:01.316098    8176 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0308 00:39:01.316098    8176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token okpz6a.0qop7h4cmrekc9k9 --discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-397400-m03": (3.0013042s)
	I0308 00:39:01.316186    8176 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0308 00:39:01.492435    8176 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0308 00:39:01.668894    8176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-397400-m03 minikube.k8s.io/updated_at=2024_03_08T00_39_01_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=multinode-397400 minikube.k8s.io/primary=false
	I0308 00:39:01.791222    8176 command_runner.go:130] > node/multinode-397400-m03 labeled
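
The labeling step above shells out to kubectl on the control plane; the same minikube.k8s.io/* labels could also be applied with a strategic-merge patch. Illustrative snippet only (assumes cs is a configured clientset):

    package sketch

    import (
        "context"
        "encoding/json"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // labelNode merges the given labels into a node's metadata, mirroring
    // "kubectl label --overwrite nodes <name> key=value ...".
    func labelNode(ctx context.Context, cs kubernetes.Interface, node string, labels map[string]string) error {
        patch, err := json.Marshal(map[string]any{"metadata": map[string]any{"labels": labels}})
        if err != nil {
            return err
        }
        _, err = cs.CoreV1().Nodes().Patch(ctx, node, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        return err
    }
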
	I0308 00:39:01.791318    8176 start.go:318] duration metric: took 19.1862767s to joinCluster
	I0308 00:39:01.791452    8176 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.20.53.127 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0308 00:39:01.794049    8176 out.go:177] * Verifying Kubernetes components...
	I0308 00:39:01.791584    8176 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:39:01.806689    8176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 00:39:02.008686    8176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 00:39:02.036777    8176 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 00:39:02.038167    8176 kapi.go:59] client config for multinode-397400: &rest.Config{Host:"https://172.20.61.151:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\multinode-397400\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 00:39:02.039185    8176 node_ready.go:35] waiting up to 6m0s for node "multinode-397400-m03" to be "Ready" ...
	I0308 00:39:02.039721    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:02.039721    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:02.039721    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:02.039721    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:02.039969    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:02.039969    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:02.039969    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:02.039969    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:02.039969    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:02.039969    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:02.039969    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:02 GMT
	I0308 00:39:02.039969    8176 round_trippers.go:580]     Audit-Id: 23174ea5-7c67-46fc-aea5-83801f390d38
	I0308 00:39:02.044369    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2074","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3519 chars]
	I0308 00:39:02.542926    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:02.542978    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:02.543121    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:02.543121    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:02.547561    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:39:02.548568    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:02.548608    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:02.548608    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:02.548608    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:02 GMT
	I0308 00:39:02.548608    8176 round_trippers.go:580]     Audit-Id: 8d89c28e-be85-417d-8a7c-6df46ed7fce1
	I0308 00:39:02.548608    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:02.548608    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:02.548608    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2074","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3519 chars]
	I0308 00:39:03.064069    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:03.064150    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:03.064182    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:03.064182    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:03.069919    8176 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0308 00:39:03.070960    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:03.071012    8176 round_trippers.go:580]     Audit-Id: 532b04f0-54db-4375-a964-70ca3487190f
	I0308 00:39:03.071012    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:03.071012    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:03.071012    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:03.071012    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:03.071065    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:03 GMT
	I0308 00:39:03.071111    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2074","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3519 chars]
	I0308 00:39:03.543916    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:03.543916    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:03.543916    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:03.543916    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:03.544294    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:03.548677    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:03.548677    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:03.548677    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:03.548677    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:03 GMT
	I0308 00:39:03.548677    8176 round_trippers.go:580]     Audit-Id: 37b1f60b-908d-4dca-9bfd-3a29c979e3a1
	I0308 00:39:03.548779    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:03.548779    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:03.548905    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2074","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3519 chars]
	I0308 00:39:04.047534    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:04.047534    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:04.047534    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:04.047626    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:04.049485    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:39:04.049485    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:04.049485    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:04 GMT
	I0308 00:39:04.049485    8176 round_trippers.go:580]     Audit-Id: 198cd6d3-8186-4f02-b63f-a7a36ad9901c
	I0308 00:39:04.049485    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:04.049485    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:04.049485    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:04.051958    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:04.052101    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2074","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3519 chars]
	I0308 00:39:04.052822    8176 node_ready.go:53] node "multinode-397400-m03" has status "Ready":"False"
	I0308 00:39:04.540796    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:04.541010    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:04.541010    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:04.541010    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:04.542904    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:39:04.542904    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:04.542904    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:04 GMT
	I0308 00:39:04.542904    8176 round_trippers.go:580]     Audit-Id: c85d7f40-d29a-407a-8a49-b1cc1ac7229e
	I0308 00:39:04.544328    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:04.544328    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:04.544328    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:04.544328    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:04.544520    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2089","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3628 chars]
	I0308 00:39:05.040045    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:05.040108    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.040108    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.040189    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.040996    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:05.040996    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.044118    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.044118    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.044118    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.044187    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.044187    8176 round_trippers.go:580]     Audit-Id: 721d6ef2-096f-4b46-b530-1fef4408d295
	I0308 00:39:05.044187    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.044327    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2093","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3763 chars]
	I0308 00:39:05.044573    8176 node_ready.go:49] node "multinode-397400-m03" has status "Ready":"True"
	I0308 00:39:05.044573    8176 node_ready.go:38] duration metric: took 3.0053597s for node "multinode-397400-m03" to be "Ready" ...
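
The node_ready wait above simply re-fetches the Node object roughly every 500ms (visible in the request timestamps) until its Ready condition turns True, which here took about 3 seconds. A compact sketch of such a poll with client-go; the interval and timeout are illustrative and cs is assumed to be a configured clientset:

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node until its Ready condition is True or the timeout expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
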
	I0308 00:39:05.044573    8176 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:39:05.044573    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods
	I0308 00:39:05.044573    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.045153    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.045153    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.045321    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:05.045321    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.045321    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.045321    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.045321    8176 round_trippers.go:580]     Audit-Id: 682b4eec-c630-43c5-b06f-3b8add619111
	I0308 00:39:05.045321    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.045321    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.045321    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.050626    8176 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2094"},"items":[{"metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1757","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82099 chars]
	I0308 00:39:05.054433    8176 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.054433    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-w4hzh
	I0308 00:39:05.054433    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.054433    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.054433    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.055194    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:05.055194    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.055194    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.055194    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.055194    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.055194    8176 round_trippers.go:580]     Audit-Id: 5d4c0bb1-f372-4287-8627-8d1d9a186415
	I0308 00:39:05.055194    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.055194    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.058437    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-w4hzh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d164fdff-2fa7-412c-86e6-f0fa957e0361","resourceVersion":"1757","creationTimestamp":"2024-03-08T00:13:52Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"533ba7e6-6e69-4f9c-951c-a1f68c26c44b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"533ba7e6-6e69-4f9c-951c-a1f68c26c44b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0308 00:39:05.059874    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:05.059942    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.059942    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.059942    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.062776    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:39:05.062776    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.062776    8176 round_trippers.go:580]     Audit-Id: 155c000d-e85f-45b3-bbba-09eff4673bc8
	I0308 00:39:05.062776    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.062776    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.062776    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.062776    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.063463    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.063718    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:39:05.063718    8176 pod_ready.go:92] pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:05.063718    8176 pod_ready.go:81] duration metric: took 9.2852ms for pod "coredns-5dd5756b68-w4hzh" in "kube-system" namespace to be "Ready" ...
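
The pod_ready checks above list kube-system pods by the component and k8s-app labels and then treat each pod as Ready when its PodReady condition is True. A minimal version of that condition check (illustrative only):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the pod's Ready condition is True, the same
    // check the pod_ready lines in the log summarize.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
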
	I0308 00:39:05.063718    8176 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.064270    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-397400
	I0308 00:39:05.064270    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.064270    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.064270    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.070662    8176 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0308 00:39:05.070785    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.070867    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.070867    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.070867    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.070867    8176 round_trippers.go:580]     Audit-Id: 12133602-7b42-4f1c-bf0f-be7c93cf2f1f
	I0308 00:39:05.070867    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.070867    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.071422    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-397400","namespace":"kube-system","uid":"afdc3d40-e2cf-4751-9d88-09ecca9f4b0a","resourceVersion":"1768","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.61.151:2379","kubernetes.io/config.hash":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.mirror":"abda2d8551d533fb95e0af73524895b4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143833844Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0308 00:39:05.071602    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:05.071602    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.071602    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.071602    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.074798    8176 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0308 00:39:05.074798    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.075478    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.075478    8176 round_trippers.go:580]     Audit-Id: 64baf02b-e56c-4067-9d8c-55fd6578aee6
	I0308 00:39:05.075478    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.075478    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.075478    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.075478    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.076721    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:39:05.077669    8176 pod_ready.go:92] pod "etcd-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:05.077669    8176 pod_ready.go:81] duration metric: took 13.9505ms for pod "etcd-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.077669    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.078431    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-397400
	I0308 00:39:05.078483    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.078533    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.078533    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.083537    8176 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0308 00:39:05.083537    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.083537    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.083537    8176 round_trippers.go:580]     Audit-Id: b6943add-398b-4593-964d-980a161be401
	I0308 00:39:05.083537    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.083537    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.083537    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.083537    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.083537    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-397400","namespace":"kube-system","uid":"1e615aff-4d66-4ded-b27a-16bc990c80a6","resourceVersion":"1767","creationTimestamp":"2024-03-08T00:34:32Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.61.151:8443","kubernetes.io/config.hash":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.mirror":"941e6e54eb39aa6061734117d3d633a4","kubernetes.io/config.seen":"2024-03-08T00:34:26.143837944Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:34:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0308 00:39:05.084145    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:05.084145    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.084145    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.084145    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.087347    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:39:05.087347    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.087347    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.087347    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.087347    8176 round_trippers.go:580]     Audit-Id: 1f0b5b2f-8ab9-412e-bec9-7c0e3d9d6cd9
	I0308 00:39:05.087347    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.087347    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.087347    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.087347    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:39:05.088020    8176 pod_ready.go:92] pod "kube-apiserver-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:05.088020    8176 pod_ready.go:81] duration metric: took 10.3511ms for pod "kube-apiserver-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.088020    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.088020    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-397400
	I0308 00:39:05.088020    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.088020    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.088020    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.088633    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:05.088633    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.088633    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.088633    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.088633    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.088633    8176 round_trippers.go:580]     Audit-Id: ae541466-7775-464c-9ce9-d7a996300698
	I0308 00:39:05.088633    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.088633    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.092213    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-397400","namespace":"kube-system","uid":"33cdb29c-e857-4fc2-b950-4fdde032852f","resourceVersion":"1769","creationTimestamp":"2024-03-08T00:13:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.mirror":"5197c047e228ee33ffa5159679dbef19","kubernetes.io/config.seen":"2024-03-08T00:13:39.441057580Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0308 00:39:05.092917    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:05.092917    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.092917    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.092917    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.094201    8176 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0308 00:39:05.094201    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.094201    8176 round_trippers.go:580]     Audit-Id: b2a01503-a08c-4bd2-8755-820705eee29d
	I0308 00:39:05.094201    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.094201    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.094201    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.094201    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.094201    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.096855    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:39:05.097198    8176 pod_ready.go:92] pod "kube-controller-manager-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:05.097198    8176 pod_ready.go:81] duration metric: took 9.1777ms for pod "kube-controller-manager-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.097198    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.250441    8176 request.go:629] Waited for 153.1094ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:39:05.250561    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gw9w9
	I0308 00:39:05.250561    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.250561    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.250776    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.251486    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:05.251486    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.251486    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.251486    8176 round_trippers.go:580]     Audit-Id: d9cba37b-2be3-4416-8aab-9394138986bc
	I0308 00:39:05.251486    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.253767    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.253767    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.253767    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.253939    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gw9w9","generateName":"kube-proxy-","namespace":"kube-system","uid":"9b5de9a2-0643-466e-9a31-4349596c0417","resourceVersion":"1907","creationTimestamp":"2024-03-08T00:16:35Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:16:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5538 chars]
	I0308 00:39:05.453024    8176 request.go:629] Waited for 198.8725ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:39:05.453109    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m02
	I0308 00:39:05.453109    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.453109    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.453109    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.453472    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:05.457565    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.457565    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.457565    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.457565    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.457565    8176 round_trippers.go:580]     Audit-Id: 5d00296c-7cc7-437d-babd-9f162725960d
	I0308 00:39:05.457565    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.457565    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.457834    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m02","uid":"5fb6a564-8819-49ce-b395-6146a5cfcabd","resourceVersion":"1928","creationTimestamp":"2024-03-08T00:36:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_36_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0308 00:39:05.458189    8176 pod_ready.go:92] pod "kube-proxy-gw9w9" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:05.458274    8176 pod_ready.go:81] duration metric: took 361.0733ms for pod "kube-proxy-gw9w9" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.458274    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.646512    8176 request.go:629] Waited for 188.0842ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:39:05.646512    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktnrd
	I0308 00:39:05.646512    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.646512    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.646512    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.649201    8176 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0308 00:39:05.649201    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.649201    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.649201    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.649201    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.649201    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.649201    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.649201    8176 round_trippers.go:580]     Audit-Id: 7627af74-c76b-4918-9644-67af9a175448
	I0308 00:39:05.650436    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ktnrd","generateName":"kube-proxy-","namespace":"kube-system","uid":"e76aaee4-f97d-4d55-b458-893eef62fb22","resourceVersion":"2080","creationTimestamp":"2024-03-08T00:20:50Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:20:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5542 chars]
	I0308 00:39:05.847000    8176 request.go:629] Waited for 195.6887ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:05.847165    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400-m03
	I0308 00:39:05.847165    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:05.847165    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:05.847165    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:05.847466    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:05.847466    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:05.850593    8176 round_trippers.go:580]     Audit-Id: d22e1496-65ac-4eb0-a128-3e7300ddb930
	I0308 00:39:05.850593    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:05.850593    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:05.850593    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:05.850593    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:05.850593    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:05 GMT
	I0308 00:39:05.850752    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400-m03","uid":"f30f1193-5789-444b-b41b-a5fa0a74c1c7","resourceVersion":"2093","creationTimestamp":"2024-03-08T00:39:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_08T00_39_01_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:39:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3763 chars]
	I0308 00:39:05.850752    8176 pod_ready.go:92] pod "kube-proxy-ktnrd" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:05.850752    8176 pod_ready.go:81] duration metric: took 392.4739ms for pod "kube-proxy-ktnrd" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:05.850752    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:06.055802    8176 request.go:629] Waited for 204.8047ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:39:06.055802    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nt8td
	I0308 00:39:06.055802    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:06.055802    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:06.055802    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:06.056556    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:06.056556    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:06.059538    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:06 GMT
	I0308 00:39:06.059538    8176 round_trippers.go:580]     Audit-Id: e1c9f332-034b-48d0-91f5-239a75f84518
	I0308 00:39:06.059538    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:06.059538    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:06.059538    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:06.059538    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:06.059903    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nt8td","generateName":"kube-proxy-","namespace":"kube-system","uid":"dafb9385-fe20-4849-bd58-31dcf82b4a58","resourceVersion":"1674","creationTimestamp":"2024-03-08T00:13:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8567c6e5-8d32-4a6c-b405-f0a669e6749c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8567c6e5-8d32-4a6c-b405-f0a669e6749c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0308 00:39:06.243796    8176 request.go:629] Waited for 183.0542ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:06.243899    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:06.243899    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:06.243899    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:06.243899    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:06.244634    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:06.247744    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:06.247744    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:06 GMT
	I0308 00:39:06.247744    8176 round_trippers.go:580]     Audit-Id: befb9d79-5a49-4569-ba8e-cc8b676dc19c
	I0308 00:39:06.247744    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:06.247744    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:06.247810    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:06.247810    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:06.247862    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:39:06.248542    8176 pod_ready.go:92] pod "kube-proxy-nt8td" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:06.248622    8176 pod_ready.go:81] duration metric: took 397.8662ms for pod "kube-proxy-nt8td" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:06.248622    8176 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:06.452531    8176 request.go:629] Waited for 203.6749ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:39:06.452735    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-397400
	I0308 00:39:06.452868    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:06.452868    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:06.452868    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:06.453157    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:06.455906    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:06.455906    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:06 GMT
	I0308 00:39:06.455906    8176 round_trippers.go:580]     Audit-Id: 315d9ecb-5318-47b0-99c7-edd9e310ec3a
	I0308 00:39:06.455906    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:06.455906    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:06.455906    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:06.455906    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:06.456159    8176 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-397400","namespace":"kube-system","uid":"3f029955-80be-4e3d-a157-faec2631b9b8","resourceVersion":"1744","creationTimestamp":"2024-03-08T00:13:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.mirror":"1d4f0572cc1a3c162f0a67765e3eb0ab","kubernetes.io/config.seen":"2024-03-08T00:13:30.884647825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-08T00:13:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0308 00:39:06.653600    8176 request.go:629] Waited for 196.8943ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:06.653600    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes/multinode-397400
	I0308 00:39:06.653600    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:06.653600    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:06.653600    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:06.654532    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:06.654532    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:06.654532    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:06.654532    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:06.657195    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:06 GMT
	I0308 00:39:06.657195    8176 round_trippers.go:580]     Audit-Id: d1704fb1-0342-4ade-85f4-57c7510d846d
	I0308 00:39:06.657195    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:06.657195    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:06.657372    8176 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-03-08T00:13:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0308 00:39:06.657517    8176 pod_ready.go:92] pod "kube-scheduler-multinode-397400" in "kube-system" namespace has status "Ready":"True"
	I0308 00:39:06.657517    8176 pod_ready.go:81] duration metric: took 408.8915ms for pod "kube-scheduler-multinode-397400" in "kube-system" namespace to be "Ready" ...
	I0308 00:39:06.657517    8176 pod_ready.go:38] duration metric: took 1.6129286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 00:39:06.657517    8176 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 00:39:06.667961    8176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 00:39:06.690680    8176 system_svc.go:56] duration metric: took 33.1621ms WaitForService to wait for kubelet
	I0308 00:39:06.690680    8176 kubeadm.go:576] duration metric: took 4.899087s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 00:39:06.690680    8176 node_conditions.go:102] verifying NodePressure condition ...
	I0308 00:39:06.847861    8176 request.go:629] Waited for 156.9418ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.61.151:8443/api/v1/nodes
	I0308 00:39:06.847861    8176 round_trippers.go:463] GET https://172.20.61.151:8443/api/v1/nodes
	I0308 00:39:06.848014    8176 round_trippers.go:469] Request Headers:
	I0308 00:39:06.848014    8176 round_trippers.go:473]     Accept: application/json, */*
	I0308 00:39:06.848014    8176 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0308 00:39:06.848350    8176 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0308 00:39:06.848350    8176 round_trippers.go:577] Response Headers:
	I0308 00:39:06.852343    8176 round_trippers.go:580]     Content-Type: application/json
	I0308 00:39:06.852343    8176 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d29ff755-d50f-42f0-bcec-6b5f8f4fd1b0
	I0308 00:39:06.852343    8176 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 570418b9-f401-46c4-9776-c107e961ec64
	I0308 00:39:06.852343    8176 round_trippers.go:580]     Date: Fri, 08 Mar 2024 00:39:06 GMT
	I0308 00:39:06.852343    8176 round_trippers.go:580]     Audit-Id: c7b16980-97ec-44fa-b493-715e62ea0e49
	I0308 00:39:06.852343    8176 round_trippers.go:580]     Cache-Control: no-cache, private
	I0308 00:39:06.853347    8176 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2095"},"items":[{"metadata":{"name":"multinode-397400","uid":"94a1e556-366a-4bf5-b5f6-8c85d19f5149","resourceVersion":"1678","creationTimestamp":"2024-03-08T00:13:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-397400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e9c2ced1cce693d4d04abc192b43cb5294694bbd","minikube.k8s.io/name":"multinode-397400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_08T00_13_40_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14849 chars]
	I0308 00:39:06.853631    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:39:06.854159    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:39:06.854159    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:39:06.854159    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:39:06.854159    8176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 00:39:06.854159    8176 node_conditions.go:123] node cpu capacity is 2
	I0308 00:39:06.854159    8176 node_conditions.go:105] duration metric: took 163.478ms to run NodePressure ...
	I0308 00:39:06.854159    8176 start.go:240] waiting for startup goroutines ...
	I0308 00:39:06.854292    8176 start.go:254] writing updated cluster config ...
	I0308 00:39:06.866206    8176 ssh_runner.go:195] Run: rm -f paused
	I0308 00:39:06.994865    8176 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 00:39:07.001960    8176 out.go:177] * Done! kubectl is now configured to use "multinode-397400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.369497695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.369516495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.370214098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.374438817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.374570917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.374791918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.375162420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:34:40 multinode-397400 cri-dockerd[1249]: time="2024-03-08T00:34:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bd3961aae453d674fbd9879978f2edf781424559bee763553ecc0b5480320532/resolv.conf as [nameserver 172.20.48.1]"
	Mar 08 00:34:40 multinode-397400 cri-dockerd[1249]: time="2024-03-08T00:34:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d97a9e240282efa34aeaa8b7d8b28489a577c9159a13eed18fd34ff81cf6b847/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.835757400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.835865801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.835882501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.835975901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.912371346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.912564047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.912724548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:34:40 multinode-397400 dockerd[1041]: time="2024-03-08T00:34:40.913086850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:35:03 multinode-397400 dockerd[1035]: time="2024-03-08T00:35:03.751092590Z" level=info msg="ignoring event" container=31baaa0408128be77387f40597623f6920d87dac0b5e733b0ef7022ae5df8c58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 08 00:35:03 multinode-397400 dockerd[1041]: time="2024-03-08T00:35:03.752938199Z" level=info msg="shim disconnected" id=31baaa0408128be77387f40597623f6920d87dac0b5e733b0ef7022ae5df8c58 namespace=moby
	Mar 08 00:35:03 multinode-397400 dockerd[1041]: time="2024-03-08T00:35:03.753088400Z" level=warning msg="cleaning up after shim disconnected" id=31baaa0408128be77387f40597623f6920d87dac0b5e733b0ef7022ae5df8c58 namespace=moby
	Mar 08 00:35:03 multinode-397400 dockerd[1041]: time="2024-03-08T00:35:03.753099500Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 08 00:35:19 multinode-397400 dockerd[1041]: time="2024-03-08T00:35:19.412792964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 00:35:19 multinode-397400 dockerd[1041]: time="2024-03-08T00:35:19.412855364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 00:35:19 multinode-397400 dockerd[1041]: time="2024-03-08T00:35:19.412867664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 00:35:19 multinode-397400 dockerd[1041]: time="2024-03-08T00:35:19.413080865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	45f94fda9ca26       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       2                   d45a9b335323c       storage-provisioner
	0c3e8474c735a       8c811b4aec35f                                                                                         6 minutes ago       Running             busybox                   1                   d97a9e240282e       busybox-5b5d89c9d6-j7ck4
	58f69bbde10c9       ead0a4a53df89                                                                                         6 minutes ago       Running             coredns                   1                   bd3961aae453d       coredns-5dd5756b68-w4hzh
	9dacbf05ab6e1       4950bb10b3f87                                                                                         6 minutes ago       Running             kindnet-cni               1                   a3a9d8e6a117e       kindnet-wkwtm
	31baaa0408128       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       1                   d45a9b335323c       storage-provisioner
	e7bc69da51949       83f6cc407eed8                                                                                         6 minutes ago       Running             kube-proxy                1                   f639fb3711ca7       kube-proxy-nt8td
	2bc9651e0b360       73deb9a3f7025                                                                                         6 minutes ago       Running             etcd                      0                   45c6fc79a1b4d       etcd-multinode-397400
	3947d85995668       e3db313c6dbc0                                                                                         6 minutes ago       Running             kube-scheduler            1                   6436a4df84b2c       kube-scheduler-multinode-397400
	ddd59e5b2501e       7fe0e6f37db33                                                                                         6 minutes ago       Running             kube-apiserver            0                   df28fa2acee46       kube-apiserver-multinode-397400
	df7b64a1988a8       d058aa5ab969c                                                                                         6 minutes ago       Running             kube-controller-manager   1                   b272848c66a23       kube-controller-manager-multinode-397400
	ce9a9bc4cfe37       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   cdb14ba552809       busybox-5b5d89c9d6-j7ck4
	b8903699a2e38       ead0a4a53df89                                                                                         27 minutes ago      Exited              coredns                   0                   13e6ea5ce4bdc       coredns-5dd5756b68-w4hzh
	91ada1ebb521d       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              27 minutes ago      Exited              kindnet-cni               0                   90ba9a9d99a3d       kindnet-wkwtm
	79433b5ca644a       83f6cc407eed8                                                                                         27 minutes ago      Exited              kube-proxy                0                   9c957cee5d35c       kube-proxy-nt8td
	0aaf57b801fb8       e3db313c6dbc0                                                                                         27 minutes ago      Exited              kube-scheduler            0                   d4b57713d4316       kube-scheduler-multinode-397400
	4f8851b134589       d058aa5ab969c                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   ead2ed31c6b3d       kube-controller-manager-multinode-397400
	
	
	==> coredns [58f69bbde10c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b0d01e750f1333b12a0afb000b64bd021779da79ee4f8aee5ecad4705d75b53898cf9670ad125c407f1c536554c13092ed2cbd72906f6f0aabed3ba5d92a353f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44776 - 53642 "HINFO IN 4310211516712145791.863761266172721005. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.145987054s
	
	
	==> coredns [b8903699a2e3] <==
	[INFO] 10.244.0.3:34101 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000146601s
	[INFO] 10.244.0.3:39343 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125001s
	[INFO] 10.244.0.3:51579 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202401s
	[INFO] 10.244.0.3:34574 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000234402s
	[INFO] 10.244.0.3:41474 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161301s
	[INFO] 10.244.0.3:56490 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117701s
	[INFO] 10.244.0.3:47237 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125501s
	[INFO] 10.244.1.2:57949 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186801s
	[INFO] 10.244.1.2:51978 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082601s
	[INFO] 10.244.1.2:53464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123401s
	[INFO] 10.244.1.2:60851 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124401s
	[INFO] 10.244.0.3:47849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000966s
	[INFO] 10.244.0.3:33374 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000329903s
	[INFO] 10.244.0.3:33498 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231301s
	[INFO] 10.244.0.3:49302 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158701s
	[INFO] 10.244.1.2:57262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157901s
	[INFO] 10.244.1.2:56667 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000185301s
	[INFO] 10.244.1.2:47521 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000193002s
	[INFO] 10.244.1.2:51329 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000258401s
	[INFO] 10.244.0.3:49110 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166601s
	[INFO] 10.244.0.3:55134 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128401s
	[INFO] 10.244.0.3:43988 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000051301s
	[INFO] 10.244.0.3:49870 - 5 "PTR IN 1.48.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000082101s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-397400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-397400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=multinode-397400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_08T00_13_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 00:13:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-397400
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 00:40:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 00:39:36 +0000   Fri, 08 Mar 2024 00:13:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 00:39:36 +0000   Fri, 08 Mar 2024 00:13:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 00:39:36 +0000   Fri, 08 Mar 2024 00:13:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 00:39:36 +0000   Fri, 08 Mar 2024 00:34:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.61.151
	  Hostname:    multinode-397400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 f58bfd6541cf46d6b45a73ca4f8c85e6
	  System UUID:                8391dbcb-b4b7-5845-b9ff-a5eba8cddcb5
	  Boot ID:                    9b542d52-a0e2-458a-8d24-b3ad596c9f52
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-j7ck4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-5dd5756b68-w4hzh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-397400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m35s
	  kube-system                 kindnet-wkwtm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-397400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-multinode-397400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-nt8td                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-397400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 6m33s                  kube-proxy       
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-397400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-397400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-397400 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-397400 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-397400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-397400 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-397400 event: Registered Node multinode-397400 in Controller
	  Normal  NodeReady                27m                    kubelet          Node multinode-397400 status is now: NodeReady
	  Normal  Starting                 6m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m41s (x8 over 6m41s)  kubelet          Node multinode-397400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s (x8 over 6m41s)  kubelet          Node multinode-397400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m41s (x7 over 6m41s)  kubelet          Node multinode-397400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m23s                  node-controller  Node multinode-397400 event: Registered Node multinode-397400 in Controller
	
	
	Name:               multinode-397400-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-397400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd
	                    minikube.k8s.io/name=multinode-397400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_08T00_36_52_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 08 Mar 2024 00:36:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-397400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 08 Mar 2024 00:41:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 08 Mar 2024 00:36:57 +0000   Fri, 08 Mar 2024 00:36:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 08 Mar 2024 00:36:57 +0000   Fri, 08 Mar 2024 00:36:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 08 Mar 2024 00:36:57 +0000   Fri, 08 Mar 2024 00:36:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 08 Mar 2024 00:36:57 +0000   Fri, 08 Mar 2024 00:36:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.50.67
	  Hostname:    multinode-397400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 28dd8cb4d1cf408a8d14fae89f734da5
	  System UUID:                12e9ba38-a8d8-e14f-9556-c9cd17fe7785
	  Boot ID:                    23f89f6e-fbed-4b79-bf6a-26ee3d3f8c37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-84btt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kindnet-jvzwq               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-gw9w9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 24m                    kube-proxy       
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x5 over 24m)      kubelet          Node multinode-397400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x5 over 24m)      kubelet          Node multinode-397400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x5 over 24m)      kubelet          Node multinode-397400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                24m                    kubelet          Node multinode-397400-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  4m15s (x5 over 4m17s)  kubelet          Node multinode-397400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s (x5 over 4m17s)  kubelet          Node multinode-397400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s (x5 over 4m17s)  kubelet          Node multinode-397400-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m13s                  node-controller  Node multinode-397400-m02 event: Registered Node multinode-397400-m02 in Controller
	  Normal  NodeReady                4m10s                  kubelet          Node multinode-397400-m02 status is now: NodeReady
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[Mar 8 00:33] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.234588] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +0.913526] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +6.040163] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +44.253272] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.137491] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Mar 8 00:34] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[  +0.089400] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.473836] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.146970] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +0.171341] systemd-fstab-generator[1028]: Ignoring "noauto" option for root device
	[  +1.880514] systemd-fstab-generator[1201]: Ignoring "noauto" option for root device
	[  +0.157597] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.158418] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.229010] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	[  +0.767976] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +3.619826] systemd-fstab-generator[1504]: Ignoring "noauto" option for root device
	[  +0.087527] kauditd_printk_skb: 227 callbacks suppressed
	[  +7.009284] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.570286] systemd-fstab-generator[3205]: Ignoring "noauto" option for root device
	[  +0.132700] kauditd_printk_skb: 48 callbacks suppressed
	[Mar 8 00:35] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [2bc9651e0b36] <==
	{"level":"info","ts":"2024-03-08T00:34:28.177531Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"701237fd4f62c309","initial-advertise-peer-urls":["https://172.20.61.151:2380"],"listen-peer-urls":["https://172.20.61.151:2380"],"advertise-client-urls":["https://172.20.61.151:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.61.151:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T00:34:28.177621Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T00:34:28.247261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"701237fd4f62c309 switched to configuration voters=(8075578642926846729)"}
	{"level":"info","ts":"2024-03-08T00:34:28.24743Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1e4eb1942c73643","local-member-id":"701237fd4f62c309","added-peer-id":"701237fd4f62c309","added-peer-peer-urls":["https://172.20.48.212:2380"]}
	{"level":"info","ts":"2024-03-08T00:34:28.248025Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e4eb1942c73643","local-member-id":"701237fd4f62c309","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T00:34:28.24806Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T00:34:28.24817Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.20.61.151:2380"}
	{"level":"info","ts":"2024-03-08T00:34:28.2482Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.20.61.151:2380"}
	{"level":"info","ts":"2024-03-08T00:34:28.251528Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T00:34:28.251814Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T00:34:28.252031Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T00:34:29.921158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"701237fd4f62c309 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-08T00:34:29.921274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"701237fd4f62c309 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-08T00:34:29.921398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"701237fd4f62c309 received MsgPreVoteResp from 701237fd4f62c309 at term 2"}
	{"level":"info","ts":"2024-03-08T00:34:29.921577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"701237fd4f62c309 became candidate at term 3"}
	{"level":"info","ts":"2024-03-08T00:34:29.921603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"701237fd4f62c309 received MsgVoteResp from 701237fd4f62c309 at term 3"}
	{"level":"info","ts":"2024-03-08T00:34:29.921614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"701237fd4f62c309 became leader at term 3"}
	{"level":"info","ts":"2024-03-08T00:34:29.921623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 701237fd4f62c309 elected leader 701237fd4f62c309 at term 3"}
	{"level":"info","ts":"2024-03-08T00:34:29.926172Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"701237fd4f62c309","local-member-attributes":"{Name:multinode-397400 ClientURLs:[https://172.20.61.151:2379]}","request-path":"/0/members/701237fd4f62c309/attributes","cluster-id":"1e4eb1942c73643","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T00:34:29.926197Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T00:34:29.926519Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T00:34:29.928045Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T00:34:29.927597Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.61.151:2379"}
	{"level":"info","ts":"2024-03-08T00:34:29.928924Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T00:34:29.929148Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:41:08 up 8 min,  0 users,  load average: 0.16, 0.43, 0.27
	Linux multinode-397400 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [91ada1ebb521] <==
	I0308 00:31:32.130125       1 main.go:250] Node multinode-397400-m03 has CIDR [10.244.3.0/24] 
	I0308 00:31:42.144211       1 main.go:223] Handling node with IPs: map[172.20.48.212:{}]
	I0308 00:31:42.144319       1 main.go:227] handling current node
	I0308 00:31:42.144332       1 main.go:223] Handling node with IPs: map[172.20.61.226:{}]
	I0308 00:31:42.144342       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:31:42.144702       1 main.go:223] Handling node with IPs: map[172.20.52.190:{}]
	I0308 00:31:42.144780       1 main.go:250] Node multinode-397400-m03 has CIDR [10.244.3.0/24] 
	I0308 00:31:52.150046       1 main.go:223] Handling node with IPs: map[172.20.48.212:{}]
	I0308 00:31:52.150087       1 main.go:227] handling current node
	I0308 00:31:52.150099       1 main.go:223] Handling node with IPs: map[172.20.61.226:{}]
	I0308 00:31:52.150107       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:31:52.150747       1 main.go:223] Handling node with IPs: map[172.20.52.190:{}]
	I0308 00:31:52.150953       1 main.go:250] Node multinode-397400-m03 has CIDR [10.244.3.0/24] 
	I0308 00:32:02.471314       1 main.go:223] Handling node with IPs: map[172.20.48.212:{}]
	I0308 00:32:02.471359       1 main.go:227] handling current node
	I0308 00:32:02.471430       1 main.go:223] Handling node with IPs: map[172.20.61.226:{}]
	I0308 00:32:02.471457       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:32:02.471613       1 main.go:223] Handling node with IPs: map[172.20.52.190:{}]
	I0308 00:32:02.471646       1 main.go:250] Node multinode-397400-m03 has CIDR [10.244.3.0/24] 
	I0308 00:32:12.479491       1 main.go:223] Handling node with IPs: map[172.20.48.212:{}]
	I0308 00:32:12.480248       1 main.go:227] handling current node
	I0308 00:32:12.480323       1 main.go:223] Handling node with IPs: map[172.20.61.226:{}]
	I0308 00:32:12.480354       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:32:12.480646       1 main.go:223] Handling node with IPs: map[172.20.52.190:{}]
	I0308 00:32:12.480864       1 main.go:250] Node multinode-397400-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [9dacbf05ab6e] <==
	I0308 00:40:04.867706       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:40:14.878573       1 main.go:223] Handling node with IPs: map[172.20.61.151:{}]
	I0308 00:40:14.878672       1 main.go:227] handling current node
	I0308 00:40:14.878685       1 main.go:223] Handling node with IPs: map[172.20.50.67:{}]
	I0308 00:40:14.878693       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:40:24.885982       1 main.go:223] Handling node with IPs: map[172.20.61.151:{}]
	I0308 00:40:24.886081       1 main.go:227] handling current node
	I0308 00:40:24.886095       1 main.go:223] Handling node with IPs: map[172.20.50.67:{}]
	I0308 00:40:24.886102       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:40:34.892885       1 main.go:223] Handling node with IPs: map[172.20.61.151:{}]
	I0308 00:40:34.892932       1 main.go:227] handling current node
	I0308 00:40:34.892944       1 main.go:223] Handling node with IPs: map[172.20.50.67:{}]
	I0308 00:40:34.892951       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:40:44.904216       1 main.go:223] Handling node with IPs: map[172.20.61.151:{}]
	I0308 00:40:44.904394       1 main.go:227] handling current node
	I0308 00:40:44.904409       1 main.go:223] Handling node with IPs: map[172.20.50.67:{}]
	I0308 00:40:44.904418       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:40:54.917798       1 main.go:223] Handling node with IPs: map[172.20.61.151:{}]
	I0308 00:40:54.917921       1 main.go:227] handling current node
	I0308 00:40:54.917934       1 main.go:223] Handling node with IPs: map[172.20.50.67:{}]
	I0308 00:40:54.917941       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	I0308 00:41:04.931916       1 main.go:223] Handling node with IPs: map[172.20.61.151:{}]
	I0308 00:41:04.932044       1 main.go:227] handling current node
	I0308 00:41:04.932057       1 main.go:223] Handling node with IPs: map[172.20.50.67:{}]
	I0308 00:41:04.932065       1 main.go:250] Node multinode-397400-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [ddd59e5b2501] <==
	I0308 00:34:31.379349       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0308 00:34:31.380093       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0308 00:34:31.380256       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0308 00:34:31.419934       1 shared_informer.go:318] Caches are synced for configmaps
	I0308 00:34:31.421611       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0308 00:34:31.422873       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0308 00:34:31.425124       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0308 00:34:31.425221       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0308 00:34:31.425322       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0308 00:34:31.425509       1 aggregator.go:166] initial CRD sync complete...
	I0308 00:34:31.425578       1 autoregister_controller.go:141] Starting autoregister controller
	I0308 00:34:31.425586       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0308 00:34:31.425592       1 cache.go:39] Caches are synced for autoregister controller
	I0308 00:34:31.426446       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 00:34:31.468358       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0308 00:34:31.487371       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 00:34:32.336480       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0308 00:34:32.871557       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.20.61.151]
	I0308 00:34:32.872892       1 controller.go:624] quota admission added evaluator for: endpoints
	I0308 00:34:32.885117       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0308 00:34:34.720003       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0308 00:34:34.896027       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0308 00:34:34.909366       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0308 00:34:35.017904       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0308 00:34:35.038760       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [4f8851b13458] <==
	I0308 00:17:20.176000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.976786ms"
	I0308 00:17:20.176273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.1µs"
	I0308 00:20:50.158570       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:20:50.159696       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-397400-m03\" does not exist"
	I0308 00:20:50.183629       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ktnrd"
	I0308 00:20:50.183663       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-srl7h"
	I0308 00:20:50.194174       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-397400-m03" podCIDRs=["10.244.2.0/24"]
	I0308 00:20:51.432910       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-397400-m03"
	I0308 00:20:51.432983       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-397400-m03 event: Registered Node multinode-397400-m03 in Controller"
	I0308 00:21:07.481594       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:28:11.562720       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:28:11.563273       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-397400-m03 status is now: NodeNotReady"
	I0308 00:28:11.585531       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-ktnrd" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 00:28:11.603986       1 event.go:307] "Event occurred" object="kube-system/kindnet-srl7h" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 00:30:24.270272       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:30:26.631888       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-397400-m03 event: Removing Node multinode-397400-m03 from Controller"
	I0308 00:30:29.668520       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:30:29.669558       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-397400-m03\" does not exist"
	I0308 00:30:29.679555       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-397400-m03" podCIDRs=["10.244.3.0/24"]
	I0308 00:30:31.632782       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-397400-m03 event: Registered Node multinode-397400-m03 in Controller"
	I0308 00:30:35.024823       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:32:01.715054       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:32:01.716052       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-397400-m03 status is now: NodeNotReady"
	I0308 00:32:02.082918       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-ktnrd" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0308 00:32:02.470368       1 event.go:307] "Event occurred" object="kube-system/kindnet-srl7h" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-controller-manager [df7b64a1988a] <==
	I0308 00:36:52.705806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="71.4µs"
	I0308 00:36:54.138570       1 event.go:307] "Event occurred" object="multinode-397400-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-397400-m02 event: Registered Node multinode-397400-m02 in Controller"
	I0308 00:36:57.941544       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:36:57.973235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.301µs"
	I0308 00:36:59.162872       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-ctt42" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-ctt42"
	I0308 00:37:04.792011       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="61.9µs"
	I0308 00:37:04.804939       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="176.801µs"
	I0308 00:37:04.825775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="59.8µs"
	I0308 00:37:04.927524       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="106.401µs"
	I0308 00:37:04.936931       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="44µs"
	I0308 00:37:05.963144       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.190865ms"
	I0308 00:37:05.963667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="29.2µs"
	I0308 00:38:54.049062       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:38:54.186832       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-397400-m03 event: Removing Node multinode-397400-m03 from Controller"
	I0308 00:39:01.188836       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:39:01.189397       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-397400-m03\" does not exist"
	I0308 00:39:01.209039       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-397400-m03" podCIDRs=["10.244.2.0/24"]
	I0308 00:39:04.188687       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-397400-m03 event: Registered Node multinode-397400-m03 in Controller"
	I0308 00:39:04.587445       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:39:57.087550       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-397400-m02"
	I0308 00:39:59.220698       1 event.go:307] "Event occurred" object="multinode-397400-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-397400-m03 event: Removing Node multinode-397400-m03 from Controller"
	I0308 00:40:44.000567       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-ktnrd"
	I0308 00:40:44.047854       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-ktnrd"
	I0308 00:40:44.048012       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-srl7h"
	I0308 00:40:44.095931       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-srl7h"
	
	
	==> kube-proxy [79433b5ca644] <==
	I0308 00:13:54.006048       1 server_others.go:69] "Using iptables proxy"
	I0308 00:13:54.040499       1 node.go:141] Successfully retrieved node IP: 172.20.48.212
	I0308 00:13:54.095908       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 00:13:54.096005       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 00:13:54.101982       1 server_others.go:152] "Using iptables Proxier"
	I0308 00:13:54.102091       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 00:13:54.102846       1 server.go:846] "Version info" version="v1.28.4"
	I0308 00:13:54.102861       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 00:13:54.104235       1 config.go:315] "Starting node config controller"
	I0308 00:13:54.104569       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 00:13:54.105241       1 config.go:97] "Starting endpoint slice config controller"
	I0308 00:13:54.106017       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 00:13:54.106286       1 config.go:188] "Starting service config controller"
	I0308 00:13:54.106444       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 00:13:54.205614       1 shared_informer.go:318] Caches are synced for node config
	I0308 00:13:54.206939       1 shared_informer.go:318] Caches are synced for service config
	I0308 00:13:54.206988       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e7bc69da5194] <==
	I0308 00:34:33.859531       1 server_others.go:69] "Using iptables proxy"
	I0308 00:34:33.939662       1 node.go:141] Successfully retrieved node IP: 172.20.61.151
	I0308 00:34:34.048460       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 00:34:34.048502       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 00:34:34.058077       1 server_others.go:152] "Using iptables Proxier"
	I0308 00:34:34.059355       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 00:34:34.060795       1 server.go:846] "Version info" version="v1.28.4"
	I0308 00:34:34.060831       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 00:34:34.068894       1 config.go:188] "Starting service config controller"
	I0308 00:34:34.070316       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 00:34:34.070384       1 config.go:97] "Starting endpoint slice config controller"
	I0308 00:34:34.070519       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 00:34:34.074000       1 config.go:315] "Starting node config controller"
	I0308 00:34:34.074036       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 00:34:34.171337       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0308 00:34:34.171644       1 shared_informer.go:318] Caches are synced for service config
	I0308 00:34:34.174768       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0aaf57b801fb] <==
	E0308 00:13:36.477702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 00:13:36.525082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0308 00:13:36.525124       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 00:13:36.600953       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 00:13:36.601042       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 00:13:36.636085       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0308 00:13:36.636109       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0308 00:13:36.684531       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 00:13:36.684579       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 00:13:36.716028       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 00:13:36.716307       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 00:13:36.848521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0308 00:13:36.848602       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0308 00:13:36.900721       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 00:13:36.900908       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 00:13:36.942519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 00:13:36.942753       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0308 00:13:36.951164       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 00:13:36.951329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 00:13:36.977745       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0308 00:13:36.977888       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0308 00:13:39.884202       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 00:32:17.869313       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0308 00:32:17.869458       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0308 00:32:17.869692       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [3947d8599566] <==
	I0308 00:34:29.069311       1 serving.go:348] Generated self-signed cert in-memory
	W0308 00:34:31.393552       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0308 00:34:31.393586       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0308 00:34:31.393596       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0308 00:34:31.393602       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0308 00:34:31.421426       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0308 00:34:31.421446       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 00:34:31.424864       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0308 00:34:31.425239       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0308 00:34:31.426003       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0308 00:34:31.427938       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0308 00:34:31.526392       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 00:36:26 multinode-397400 kubelet[1511]: E0308 00:36:26.277114    1511 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 00:36:26 multinode-397400 kubelet[1511]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 00:36:26 multinode-397400 kubelet[1511]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 00:36:26 multinode-397400 kubelet[1511]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 00:36:26 multinode-397400 kubelet[1511]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 00:37:26 multinode-397400 kubelet[1511]: E0308 00:37:26.279458    1511 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 00:37:26 multinode-397400 kubelet[1511]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 00:37:26 multinode-397400 kubelet[1511]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 00:37:26 multinode-397400 kubelet[1511]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 00:37:26 multinode-397400 kubelet[1511]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 00:38:26 multinode-397400 kubelet[1511]: E0308 00:38:26.279626    1511 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 00:38:26 multinode-397400 kubelet[1511]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 00:38:26 multinode-397400 kubelet[1511]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 00:38:26 multinode-397400 kubelet[1511]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 00:38:26 multinode-397400 kubelet[1511]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 00:39:26 multinode-397400 kubelet[1511]: E0308 00:39:26.278954    1511 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 00:39:26 multinode-397400 kubelet[1511]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 00:39:26 multinode-397400 kubelet[1511]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 00:39:26 multinode-397400 kubelet[1511]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 00:39:26 multinode-397400 kubelet[1511]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 08 00:40:26 multinode-397400 kubelet[1511]: E0308 00:40:26.278346    1511 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 08 00:40:26 multinode-397400 kubelet[1511]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 08 00:40:26 multinode-397400 kubelet[1511]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 08 00:40:26 multinode-397400 kubelet[1511]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 08 00:40:26 multinode-397400 kubelet[1511]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 00:41:00.603976     784 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-397400 -n multinode-397400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-397400 -n multinode-397400: (10.7838063s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-397400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (39.29s)
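The `Unable to resolve the current Docker CLI context "default"` warning in the stderr block above recurs on every minikube invocation in this run; per the quoted error, the CLI cannot open the context metadata file under C:\Users\jenkins.minikube7\.docker\contexts\meta. A minimal way to inspect the context store on the host, assuming the docker CLI is installed on the Jenkins agent (illustrative commands, not part of the harness output):

	# list the contexts the Docker CLI can currently resolve
	docker context ls
	# show how the "default" context is defined and where its metadata would live
	docker context inspect default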

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (307.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-463800 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-463800 --driver=hyperv: exit status 1 (4m59.673165s)

                                                
                                                
-- stdout --
	* [NoKubernetes-463800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16214
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-463800" primary control-plane node in "NoKubernetes-463800" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 00:56:23.708492    7444 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-463800 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-463800 -n NoKubernetes-463800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-463800 -n NoKubernetes-463800: exit status 7 (7.51446s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 01:01:23.390782   10400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0308 01:01:30.749693   10400 status.go:352] failed to get driver ip: getting IP: IP not found
	E0308 01:01:30.749693   10400 status.go:249] status error: getting IP: IP not found

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-463800" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (307.19s)
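The status error above ("failed to get driver ip: getting IP: IP not found") indicates the hyperv driver could not read an IPv4 address from the VM's first network adapter. The same PowerShell query the driver issues (it appears verbatim in other logs in this report) can be repeated by hand on the host; a sketch, assuming the NoKubernetes-463800 VM still exists in Hyper-V:

	# VM power state as seen by the hyperv driver
	( Hyper-V\Get-VM NoKubernetes-463800 ).state
	# first IP address on the VM's first network adapter; empty output reproduces "IP not found"
	(( Hyper-V\Get-VM NoKubernetes-463800 ).networkadapters[0]).ipaddresses[0]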

                                                
                                    
TestPause/serial/Unpause (112.28s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-549000 --alsologtostderr -v=5
pause_test.go:121: (dbg) Non-zero exit: out/minikube-windows-amd64.exe unpause -p pause-549000 --alsologtostderr -v=5: exit status 1 (5.7859851s)

                                                
                                                
-- stdout --
	* Unpausing node pause-549000 ... 

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 01:34:03.639820   10208 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0308 01:34:03.748103   10208 out.go:291] Setting OutFile to fd 1976 ...
	I0308 01:34:03.751732   10208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 01:34:03.751732   10208 out.go:304] Setting ErrFile to fd 1812...
	I0308 01:34:03.751732   10208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 01:34:03.768050   10208 mustload.go:65] Loading cluster: pause-549000
	I0308 01:34:03.768723   10208 config.go:182] Loaded profile config "pause-549000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:34:03.769620   10208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:34:06.235395   10208 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:34:06.235496   10208 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:34:06.235496   10208 host.go:66] Checking if "pause-549000" exists ...
	I0308 01:34:06.236127   10208 out.go:298] Setting JSON to false
	I0308 01:34:06.238364   10208 unpause.go:53] namespaces: [kube-system kubernetes-dashboard storage-gluster istio-operator] keys: map[addons:[] all:false apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:8443 auto-pause-interval:1m0s auto-update-drivers:true base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 binary-mirror: bootstrapper:kubeadm cache-images:true cancel-scheduled:false cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:false disable-driver-mounts:false disable-metrics:false disable-optimizations:false disk-size:20000mb dns-domain:cluster.local dns-proxy:false docker-env:[] docker-opt:[] download-only:false driver: dry-run:false embed-certs:false embedcerts:false enable-default-cni:false extra-config: extra-disks:0 feature-gates: force:false force-systemd:false gpus: ha:false host-dns-resolver:true host-only-cidr:192.168.59.1/24 host-
only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:false hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:true interactive:true iso-url:[https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.32.1-1708638130-18020/minikube-v1.32.1-1708638130-18020-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.32.1-1708638130-18020-amd64.iso] keep-context:false keep-context-active:false kubernetes-version: kvm-gpu:false kvm-hidden:false kvm-network:default kvm-numa-count:1 kvm-qemu-uri:qemu:///system listen-address: maxauditentries:1000 memory: mount:false mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:262144 mount-options:[] mount-port:0 mount-string:C:\Users\jenkins.minikube7:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type
:virtio native-ssh:true network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:false no-vtx-check:false nodes:1 output:text ports:[] preload:true profile:pause-549000 purge:false qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:24 rootless:false schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:false socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:22 ssh-user:root static-ip: subnet: trace: user: uuid: vm:false vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:true wantupdatenotification:true wantvirtualboxdriverwarning:true]
	I0308 01:34:06.238426   10208 unpause.go:65] node: {Name: IP:172.20.54.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 01:34:06.243369   10208 out.go:177] * Unpausing node pause-549000 ... 
	I0308 01:34:06.245804   10208 host.go:66] Checking if "pause-549000" exists ...
	I0308 01:34:06.259600   10208 ssh_runner.go:195] Run: systemctl --version
	I0308 01:34:06.259600   10208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:34:08.728164   10208 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:34:08.728164   10208 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:34:08.728164   10208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
pause_test.go:123: failed to unpause minikube with args: "out/minikube-windows-amd64.exe unpause -p pause-549000 --alsologtostderr -v=5" : exit status 1
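The unpause command exited with status 1 while the driver was still resolving the node address (the captured log ends at the ipaddresses query above). One way to check whether any containers on the node remain paused, assuming the pause-549000 guest is still reachable over SSH (an illustrative check, not part of the test flow):

	# list containers the Docker runtime on the node reports as paused
	out/minikube-windows-amd64.exe -p pause-549000 ssh -- docker ps --filter status=paused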
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-549000 -n pause-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-549000 -n pause-549000: exit status 2 (14.1763498s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 01:34:09.475761   10648 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/Unpause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/Unpause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-549000 logs -n 25
E0308 01:34:37.416746    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-549000 logs -n 25: (19.8895388s)
helpers_test.go:252: TestPause/serial/Unpause logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	| Command |                    Args                    |    Profile     |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p auto-503300 sudo cat                    | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:32 UTC | 08 Mar 24 01:32 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo                     | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:32 UTC | 08 Mar 24 01:33 UTC |
	|         | systemctl cat kubelet                      |                |                   |         |                     |                     |
	|         | --no-pager                                 |                |                   |         |                     |                     |
	| ssh     | -p auto-503300 sudo                        | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:32 UTC | 08 Mar 24 01:33 UTC |
	|         | cri-dockerd --version                      |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo                     | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:33 UTC |
	|         | journalctl -xeu kubelet --all              |                |                   |         |                     |                     |
	|         | --full --no-pager                          |                |                   |         |                     |                     |
	| ssh     | -p auto-503300 sudo systemctl              | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC |                     |
	|         | status containerd --all --full             |                |                   |         |                     |                     |
	|         | --no-pager                                 |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo cat                 | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:33 UTC |
	|         | /etc/kubernetes/kubelet.conf               |                |                   |         |                     |                     |
	| ssh     | -p calico-503300 pgrep -a                  | calico-503300  | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:33 UTC |
	|         | kubelet                                    |                |                   |         |                     |                     |
	| ssh     | -p auto-503300 sudo systemctl              | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:33 UTC |
	|         | cat containerd --no-pager                  |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo cat                 | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:33 UTC |
	|         | /var/lib/kubelet/config.yaml               |                |                   |         |                     |                     |
	| ssh     | -p auto-503300 sudo cat                    | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:33 UTC |
	|         | /lib/systemd/system/containerd.service     |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo                     | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:33 UTC |
	|         | systemctl status docker --all              |                |                   |         |                     |                     |
	|         | --full --no-pager                          |                |                   |         |                     |                     |
	| ssh     | -p auto-503300 sudo cat                    | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:33 UTC |
	|         | /etc/containerd/config.toml                |                |                   |         |                     |                     |
	| pause   | -p pause-549000                            | pause-549000   | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:33 UTC |
	|         | --alsologtostderr -v=5                     |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo                     | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:33 UTC |
	|         | systemctl cat docker                       |                |                   |         |                     |                     |
	|         | --no-pager                                 |                |                   |         |                     |                     |
	| ssh     | -p auto-503300 sudo containerd             | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:34 UTC |
	|         | config dump                                |                |                   |         |                     |                     |
	| ssh     | -p calico-503300 sudo cat                  | calico-503300  | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:34 UTC |
	|         | /etc/nsswitch.conf                         |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo cat                 | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:34 UTC |
	|         | /etc/docker/daemon.json                    |                |                   |         |                     |                     |
	| ssh     | -p auto-503300 sudo systemctl              | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC |                     |
	|         | status crio --all --full                   |                |                   |         |                     |                     |
	|         | --no-pager                                 |                |                   |         |                     |                     |
	| unpause | -p pause-549000                            | pause-549000   | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC |                     |
	|         | --alsologtostderr -v=5                     |                |                   |         |                     |                     |
	| ssh     | -p calico-503300 sudo cat                  | calico-503300  | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:34 UTC |
	|         | /etc/hosts                                 |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo docker              | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:34 UTC |
	|         | system info                                |                |                   |         |                     |                     |
	| ssh     | -p auto-503300 sudo systemctl              | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:34 UTC |
	|         | cat crio --no-pager                        |                |                   |         |                     |                     |
	| ssh     | -p calico-503300 sudo cat                  | calico-503300  | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC |                     |
	|         | /etc/resolv.conf                           |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo                     | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC |                     |
	|         | systemctl status cri-docker                |                |                   |         |                     |                     |
	|         | --all --full --no-pager                    |                |                   |         |                     |                     |
	| ssh     | -p auto-503300 sudo find                   | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC |                     |
	|         | /etc/crio -type f -exec sh -c              |                |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                       |                |                   |         |                     |                     |
	|---------|--------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 01:25:41
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 01:25:41.648436    3724 out.go:291] Setting OutFile to fd 1892 ...
	I0308 01:25:41.649266    3724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 01:25:41.649365    3724 out.go:304] Setting ErrFile to fd 1800...
	I0308 01:25:41.649365    3724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 01:25:41.673513    3724 out.go:298] Setting JSON to false
	I0308 01:25:41.676785    3724 start.go:129] hostinfo: {"hostname":"minikube7","uptime":20095,"bootTime":1709841045,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0308 01:25:41.676785    3724 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0308 01:25:41.682873    3724 out.go:177] * [pause-549000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0308 01:25:41.685437    3724 notify.go:220] Checking for updates...
	I0308 01:25:41.687564    3724 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 01:25:41.691498    3724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 01:25:41.694320    3724 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0308 01:25:41.696224    3724 out.go:177]   - MINIKUBE_LOCATION=16214
	I0308 01:25:41.699946    3724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 01:25:38.570183    3532 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:25:38.570183    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:39.580020    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:25:41.647645    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:25:41.647645    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:41.647887    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:25:41.703851    3724 config.go:182] Loaded profile config "pause-549000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:25:41.704909    3724 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 01:25:46.757943    3724 out.go:177] * Using the hyperv driver based on existing profile
	I0308 01:25:46.761217    3724 start.go:297] selected driver: hyperv
	I0308 01:25:46.761217    3724 start.go:901] validating driver "hyperv" against &{Name:pause-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.28.4 ClusterName:pause-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.54.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 01:25:46.761785    3724 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 01:25:46.810215    3724 cni.go:84] Creating CNI manager for ""
	I0308 01:25:46.810318    3724 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0308 01:25:46.810506    3724 start.go:340] cluster config:
	{Name:pause-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-549000 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.54.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 01:25:46.810506    3724 iso.go:125] acquiring lock: {Name:mk41e0d38e058de906ab8df117c3158b3dc0e5b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 01:25:46.814959    3724 out.go:177] * Starting "pause-549000" primary control-plane node in "pause-549000" cluster
	I0308 01:25:44.094507    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:25:44.094507    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:44.107203    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:25:46.106633    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:25:46.113575    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:46.113720    3532 machine.go:94] provisionDockerMachine start ...
	I0308 01:25:46.113865    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:25:46.818394    3724 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 01:25:46.818634    3724 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0308 01:25:46.818634    3724 cache.go:56] Caching tarball of preloaded images
	I0308 01:25:46.818973    3724 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0308 01:25:46.819135    3724 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0308 01:25:46.819403    3724 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\config.json ...
	I0308 01:25:46.821895    3724 start.go:360] acquireMachinesLock for pause-549000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 01:25:48.073857    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:25:48.081473    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:48.081473    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:25:50.304089    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:25:50.304089    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:50.309154    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:25:50.309799    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:25:50.309799    3532 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 01:25:50.429161    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 01:25:50.429338    3532 buildroot.go:166] provisioning hostname "auto-503300"
	I0308 01:25:50.429416    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:25:52.317819    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:25:52.319149    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:52.319149    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:25:54.570730    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:25:54.570819    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:54.575957    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:25:54.576851    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:25:54.576917    3532 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-503300 && echo "auto-503300" | sudo tee /etc/hostname
	I0308 01:25:54.722371    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-503300
	
	I0308 01:25:54.722477    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:25:56.611604    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:25:56.611604    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:56.611604    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:25:58.884745    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:25:58.884745    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:58.895116    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:25:58.895116    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:25:58.895116    3532 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-503300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-503300/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-503300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 01:25:59.033838    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
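
The two SSH commands above (set the hostname, then patch /etc/hosts only when the entry is missing) illustrate the guest-provisioning pattern this log repeats: resolve the VM's address through Hyper-V, open an SSH session with the machine's key, and run an idempotent shell command. Below is a minimal Go sketch of that pattern using golang.org/x/crypto/ssh; the address, user, key path, and the simplified grep guard are illustrative assumptions, not minikube's provisioner code.

	package main
	
	import (
		"fmt"
		"log"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	// runGuarded connects to the guest and runs a command that only edits
	// /etc/hosts when the expected entry is missing, a simplified version of
	// the guarded script in the log above. Host, user and key path are
	// assumptions for illustration.
	func runGuarded(addr, user, keyPath, hostname string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; verify host keys in real use
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		cmd := fmt.Sprintf(
			"grep -q '127.0.1.1 %s' /etc/hosts || echo '127.0.1.1 %s' | sudo tee -a /etc/hosts",
			hostname, hostname)
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}
	
	func main() {
		out, err := runGuarded("172.20.53.54:22", "docker", `C:\path\to\id_rsa`, "auto-503300")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(out)
	}

Because the command is guarded, re-running provisioning leaves an already-correct /etc/hosts untouched.
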
	I0308 01:25:59.033903    3532 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 01:25:59.033955    3532 buildroot.go:174] setting up certificates
	I0308 01:25:59.034022    3532 provision.go:84] configureAuth start
	I0308 01:25:59.034071    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:00.917365    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:00.929050    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:00.929050    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:03.161786    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:03.172673    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:03.172779    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:05.065907    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:05.065907    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:05.066160    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:07.281486    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:07.281486    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:07.291884    3532 provision.go:143] copyHostCerts
	I0308 01:26:07.292292    3532 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 01:26:07.292549    3532 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 01:26:07.293058    3532 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 01:26:07.294310    3532 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 01:26:07.294394    3532 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 01:26:07.294993    3532 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 01:26:07.296479    3532 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 01:26:07.296479    3532 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 01:26:07.296704    3532 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 01:26:07.297691    3532 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.auto-503300 san=[127.0.0.1 172.20.53.54 auto-503300 localhost minikube]
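
provision.go:117 issues a per-machine server certificate signed by the local CA, carrying the SANs listed above (loopback, the VM address, the machine name). The sketch below shows one way to issue such a certificate with Go's crypto/x509; to stay self-contained it creates a throwaway CA in memory rather than loading ca.pem/ca-key.pem from the store, and the validity period and subject values are assumptions.

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Throwaway CA; error handling for key generation trimmed for brevity.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour), // validity is an assumption
			IsCA:                  true,
			BasicConstraintsValid: true,
			KeyUsage:              x509.KeyUsageCertSign,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Server certificate with the SANs reported in the log above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.auto-503300"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"auto-503300", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.53.54")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		// Only the certificate is printed here; the real flow also writes the key.
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
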
	I0308 01:26:07.436045    3532 provision.go:177] copyRemoteCerts
	I0308 01:26:07.446321    3532 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 01:26:07.446321    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:09.325949    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:09.326122    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:09.326204    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:11.589156    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:11.599294    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:11.599733    3532 sshutil.go:53] new ssh client: &{IP:172.20.53.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-503300\id_rsa Username:docker}
	I0308 01:26:11.701896    3532 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2555353s)
	I0308 01:26:11.702135    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 01:26:11.744150    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1204 bytes)
	I0308 01:26:11.784961    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 01:26:11.834318    3532 provision.go:87] duration metric: took 12.8001083s to configureAuth
	I0308 01:26:11.834385    3532 buildroot.go:189] setting minikube options for container-runtime
	I0308 01:26:11.834385    3532 config.go:182] Loaded profile config "auto-503300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:26:11.834385    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:13.695320    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:13.695471    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:13.695542    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:15.957966    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:15.972484    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:15.978411    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:26:15.979137    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:26:15.979137    3532 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 01:26:16.100759    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 01:26:16.100825    3532 buildroot.go:70] root file system type: tmpfs
	I0308 01:26:16.100825    3532 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 01:26:16.100825    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:17.921771    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:17.921771    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:17.934024    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:20.149980    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:20.159608    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:20.164834    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:26:20.164834    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:26:20.165419    3532 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 01:26:20.305681    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 01:26:20.305681    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:22.179414    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:22.179414    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:22.188959    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:24.435390    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:24.435680    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:24.440190    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:26:24.440851    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:26:24.440851    3532 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 01:26:25.579859    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0308 01:26:25.579926    3532 machine.go:97] duration metric: took 39.4658385s to provisionDockerMachine
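
The docker.service update a few lines above is deliberately idempotent: the rendered unit is written to docker.service.new, diffed against the installed unit, and only when they differ is it moved into place and the daemon reloaded, enabled, and restarted. A small sketch of how such a compare-and-swap command could be assembled; the helper name is hypothetical.

	package main
	
	import "fmt"
	
	// swapUnitCmd builds the compare-and-swap shell command seen in the log:
	// the new unit only replaces the old one (and triggers reload/enable/restart)
	// when the contents actually differ, so repeated provisioning is a no-op.
	func swapUnitCmd(unit string) string {
		path := "/lib/systemd/system/" + unit
		return fmt.Sprintf(
			"sudo diff -u %[1]s %[1]s.new || "+
				"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
				"sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
			path, unit)
	}
	
	func main() {
		fmt.Println(swapUnitCmd("docker.service"))
	}
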
	I0308 01:26:25.579926    3532 client.go:171] duration metric: took 1m48.1504169s to LocalClient.Create
	I0308 01:26:25.579984    3532 start.go:167] duration metric: took 1m48.1504748s to libmachine.API.Create "auto-503300"
	I0308 01:26:25.579984    3532 start.go:293] postStartSetup for "auto-503300" (driver="hyperv")
	I0308 01:26:25.580033    3532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 01:26:25.591025    3532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 01:26:25.591025    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:27.500771    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:27.500771    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:27.511285    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:29.792928    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:29.792928    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:29.803254    3532 sshutil.go:53] new ssh client: &{IP:172.20.53.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-503300\id_rsa Username:docker}
	I0308 01:26:29.901407    3532 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3102672s)
	I0308 01:26:29.912962    3532 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 01:26:29.919625    3532 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 01:26:29.919724    3532 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 01:26:29.920215    3532 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 01:26:29.921135    3532 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 01:26:29.929690    3532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 01:26:29.950481    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 01:26:29.991809    3532 start.go:296] duration metric: took 4.4117844s for postStartSetup
	I0308 01:26:29.994691    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:31.854356    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:31.854356    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:31.864683    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:34.101323    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:34.101323    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:34.101323    3532 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\config.json ...
	I0308 01:26:34.105074    3532 start.go:128] duration metric: took 1m56.6801618s to createHost
	I0308 01:26:34.105074    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:35.997146    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:35.997146    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:35.998272    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:42.551919    4296 start.go:364] duration metric: took 2m39.3659037s to acquireMachinesLock for "kindnet-503300"
	I0308 01:26:42.551919    4296 start.go:93] Provisioning new machine with config: &{Name:kindnet-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.28.4 ClusterName:kindnet-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 01:26:42.552603    4296 start.go:125] createHost starting for "" (driver="hyperv")
	I0308 01:26:38.220751    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:38.220751    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:38.230426    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:26:38.230517    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:26:38.230517    3532 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 01:26:38.348059    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709861198.352439443
	
	I0308 01:26:38.348150    3532 fix.go:216] guest clock: 1709861198.352439443
	I0308 01:26:38.348150    3532 fix.go:229] Guest: 2024-03-08 01:26:38.352439443 +0000 UTC Remote: 2024-03-08 01:26:34.1050742 +0000 UTC m=+291.229849901 (delta=4.247365243s)
	I0308 01:26:38.348272    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:40.192711    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:40.192711    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:40.192711    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:42.405108    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:42.415591    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:42.420733    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:26:42.420733    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:26:42.420733    3532 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709861198
	I0308 01:26:42.551023    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 01:26:38 UTC 2024
	
	I0308 01:26:42.551570    3532 fix.go:236] clock set: Fri Mar  8 01:26:38 UTC 2024
	 (err=<nil>)
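
fix.go reads the guest clock with `date +%s.%N`, compares it against the host-side timestamp, and, given the ~4.25s delta shown above, resets the guest clock with `sudo date -s @<seconds>`. The sketch below reproduces the skew computation and the reset command using the timestamps from this log; the 2-second tolerance is an assumption for illustration, not minikube's actual threshold.

	package main
	
	import (
		"fmt"
		"time"
	)
	
	// clockSkew reports the absolute drift between the guest clock and a
	// reference clock.
	func clockSkew(guest, ref time.Time) time.Duration {
		d := guest.Sub(ref)
		if d < 0 {
			d = -d
		}
		return d
	}
	
	// resetCmd formats the command shown in the log for pinning the guest
	// clock to a given Unix timestamp.
	func resetCmd(t time.Time) string {
		return fmt.Sprintf("sudo date -s @%d", t.Unix())
	}
	
	func main() {
		// Timestamps reported in the log above (guest clock vs. host-side time).
		host := time.Date(2024, 3, 8, 1, 26, 34, 105074200, time.UTC)
		guest := time.Unix(1709861198, 352439443).UTC()
		fmt.Println("skew:", clockSkew(guest, host)) // ~4.247365243s, as logged
		if clockSkew(guest, host) > 2*time.Second {  // tolerance is an assumption
			fmt.Println(resetCmd(guest)) // "sudo date -s @1709861198", as logged
		}
	}
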
	I0308 01:26:42.551570    3532 start.go:83] releasing machines lock for "auto-503300", held for 2m5.1275299s
	I0308 01:26:42.551889    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:42.556793    4296 out.go:204] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0308 01:26:42.557469    4296 start.go:159] libmachine.API.Create for "kindnet-503300" (driver="hyperv")
	I0308 01:26:42.557469    4296 client.go:168] LocalClient.Create starting
	I0308 01:26:42.558595    4296 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0308 01:26:42.558864    4296 main.go:141] libmachine: Decoding PEM data...
	I0308 01:26:42.558864    4296 main.go:141] libmachine: Parsing certificate...
	I0308 01:26:42.559144    4296 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0308 01:26:42.559351    4296 main.go:141] libmachine: Decoding PEM data...
	I0308 01:26:42.559455    4296 main.go:141] libmachine: Parsing certificate...
	I0308 01:26:42.559543    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0308 01:26:44.344188    4296 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0308 01:26:44.344188    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:44.344288    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0308 01:26:45.968323    4296 main.go:141] libmachine: [stdout =====>] : False
	
	I0308 01:26:45.968323    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:45.977620    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0308 01:26:44.540626    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:44.551551    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:44.551551    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:46.941916    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:46.941916    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:46.959646    3532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 01:26:46.959758    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:46.971478    3532 ssh_runner.go:195] Run: cat /version.json
	I0308 01:26:46.971478    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:47.425959    4296 main.go:141] libmachine: [stdout =====>] : True
	
	I0308 01:26:47.425959    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:47.425959    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0308 01:26:50.955311    4296 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0308 01:26:50.955311    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:50.969252    4296 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 01:26:49.101410    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:49.101410    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:49.101675    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:49.104886    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:49.105096    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:49.105096    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:51.591560    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:51.597617    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:51.598236    3532 sshutil.go:53] new ssh client: &{IP:172.20.53.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-503300\id_rsa Username:docker}
	I0308 01:26:51.643499    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:51.643681    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:51.643935    3532 sshutil.go:53] new ssh client: &{IP:172.20.53.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-503300\id_rsa Username:docker}
	I0308 01:26:51.789656    3532 ssh_runner.go:235] Completed: cat /version.json: (4.8181335s)
	I0308 01:26:51.789656    3532 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8298536s)
	I0308 01:26:51.809707    3532 ssh_runner.go:195] Run: systemctl --version
	I0308 01:26:51.831674    3532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 01:26:51.841472    3532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 01:26:51.854794    3532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 01:26:51.882894    3532 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 01:26:51.882965    3532 start.go:494] detecting cgroup driver to use...
	I0308 01:26:51.883337    3532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:26:51.930545    3532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 01:26:51.960798    3532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 01:26:51.978215    3532 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 01:26:51.990296    3532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 01:26:52.024128    3532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:26:52.055152    3532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 01:26:52.083923    3532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:26:52.120146    3532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 01:26:52.152960    3532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 01:26:52.186885    3532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 01:26:52.215923    3532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 01:26:52.243740    3532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:26:52.442704    3532 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0308 01:26:52.469884    3532 start.go:494] detecting cgroup driver to use...
	I0308 01:26:52.483638    3532 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 01:26:52.516887    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:26:52.551980    3532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 01:26:52.597700    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:26:52.631139    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 01:26:52.664782    3532 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0308 01:26:52.854141    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 01:26:52.880631    3532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:26:52.926338    3532 ssh_runner.go:195] Run: which cri-dockerd
	I0308 01:26:52.943820    3532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 01:26:52.948063    3532 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 01:26:53.001144    3532 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 01:26:51.389638    4296 main.go:141] libmachine: Creating SSH key...
	I0308 01:26:51.707284    4296 main.go:141] libmachine: Creating VM...
	I0308 01:26:51.707284    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0308 01:26:54.504111    4296 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0308 01:26:54.504169    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:54.504274    4296 main.go:141] libmachine: Using switch "Default Switch"
	I0308 01:26:54.504274    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0308 01:26:56.163411    4296 main.go:141] libmachine: [stdout =====>] : True
	
	I0308 01:26:56.163411    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:56.168332    4296 main.go:141] libmachine: Creating VHD
	I0308 01:26:56.168332    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0308 01:26:53.186556    3532 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 01:26:53.356084    3532 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 01:26:53.356287    3532 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0308 01:26:53.396712    3532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:26:53.579319    3532 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 01:26:55.183946    3532 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6046124s)
	I0308 01:26:55.198920    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 01:26:55.239542    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:26:55.273924    3532 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 01:26:55.458065    3532 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 01:26:55.637833    3532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:26:55.831707    3532 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 01:26:55.870223    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:26:55.903532    3532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:26:56.082831    3532 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 01:26:56.188829    3532 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 01:26:56.200642    3532 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 01:26:56.210168    3532 start.go:562] Will wait 60s for crictl version
	I0308 01:26:56.221773    3532 ssh_runner.go:195] Run: which crictl
	I0308 01:26:56.238556    3532 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 01:26:56.307942    3532 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 01:26:56.320124    3532 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:26:56.363490    3532 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:26:56.393244    3532 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 01:26:56.393329    3532 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 01:26:56.397931    3532 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 01:26:56.397931    3532 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 01:26:56.397931    3532 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 01:26:56.397931    3532 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 01:26:56.400546    3532 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 01:26:56.400546    3532 ip.go:210] interface addr: 172.20.48.1/20
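
ip.go picks the host-side address to publish as host.minikube.internal by scanning the host's network interfaces for one whose name starts with "vEthernet (Default Switch)" and taking its IPv4 address (172.20.48.1 above). Below is a standard-library sketch of that prefix match; it is not minikube's ip.go itself.

	package main
	
	import (
		"fmt"
		"log"
		"net"
		"strings"
	)
	
	// ipForInterfacePrefix returns the first IPv4 address on an interface whose
	// name starts with the given prefix, mirroring the matching seen in the log.
	func ipForInterfacePrefix(prefix string) (net.IP, error) {
		ifaces, err := net.Interfaces()
		if err != nil {
			return nil, err
		}
		for _, iface := range ifaces {
			if !strings.HasPrefix(iface.Name, prefix) {
				continue
			}
			addrs, err := iface.Addrs()
			if err != nil {
				return nil, err
			}
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
					return ipnet.IP.To4(), nil
				}
			}
		}
		return nil, fmt.Errorf("no interface matching prefix %q with an IPv4 address", prefix)
	}
	
	func main() {
		ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(ip)
	}
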
	I0308 01:26:56.405190    3532 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 01:26:56.415198    3532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 01:26:56.434162    3532 kubeadm.go:877] updating cluster {Name:auto-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:auto-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.53.54 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 01:26:56.434467    3532 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 01:26:56.442940    3532 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:26:56.467020    3532 docker.go:685] Got preloaded images: 
	I0308 01:26:56.467083    3532 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0308 01:26:56.479517    3532 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0308 01:26:56.513373    3532 ssh_runner.go:195] Run: which lz4
	I0308 01:26:56.530824    3532 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 01:26:56.539522    3532 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 01:26:56.539738    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0308 01:27:00.311883    4296 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : A9FD6913-AAF3-4A6E-AF4C-D0C0425612C6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0308 01:27:00.311883    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:00.311883    4296 main.go:141] libmachine: Writing magic tar header
	I0308 01:27:00.312112    4296 main.go:141] libmachine: Writing SSH key tar header
	I0308 01:27:00.321396    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0308 01:26:59.368764    3532 docker.go:649] duration metric: took 2.8482129s to copy over tarball
	I0308 01:26:59.380291    3532 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 01:27:03.427555    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:03.427804    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:03.427905    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\disk.vhd' -SizeBytes 20000MB
	I0308 01:27:05.857014    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:05.857014    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:05.868789    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM kindnet-503300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0308 01:27:08.384095    3532 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.0037203s)
	I0308 01:27:08.384202    3532 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 01:27:08.449832    3532 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0308 01:27:08.467051    3532 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0308 01:27:08.507707    3532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:27:08.681148    3532 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 01:27:12.625747    3532 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.944562s)
	I0308 01:27:12.635611    3532 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:27:12.661404    3532 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0308 01:27:12.661404    3532 cache_images.go:84] Images are preloaded, skipping loading
	I0308 01:27:12.661404    3532 kubeadm.go:928] updating node { 172.20.53.54 8443 v1.28.4 docker true true} ...
	I0308 01:27:12.662103    3532 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-503300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.53.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:auto-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 01:27:12.673296    3532 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0308 01:27:12.708893    3532 cni.go:84] Creating CNI manager for ""
	I0308 01:27:12.708893    3532 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0308 01:27:12.708893    3532 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 01:27:12.708893    3532 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.53.54 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-503300 NodeName:auto-503300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.53.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.53.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 01:27:12.708893    3532 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.53.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "auto-503300"
	  kubeletExtraArgs:
	    node-ip: 172.20.53.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.53.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 01:27:12.721494    3532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 01:27:12.738504    3532 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 01:27:12.749986    3532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 01:27:12.767775    3532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0308 01:27:12.805442    3532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 01:27:12.839497    3532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0308 01:27:12.881599    3532 ssh_runner.go:195] Run: grep 172.20.53.54	control-plane.minikube.internal$ /etc/hosts
	I0308 01:27:12.887340    3532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.53.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 01:27:12.920951    3532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:27:12.378178    4296 main.go:141] libmachine: [stdout =====>] : 
	Name           State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----           ----- ----------- ----------------- ------   ------             -------
	kindnet-503300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0308 01:27:12.378178    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:12.388950    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName kindnet-503300 -DynamicMemoryEnabled $false
	I0308 01:27:14.476607    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:14.480074    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:14.480074    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor kindnet-503300 -Count 2
	I0308 01:27:13.098573    3532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:27:13.125907    3532 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300 for IP: 172.20.53.54
	I0308 01:27:13.125940    3532 certs.go:194] generating shared ca certs ...
	I0308 01:27:13.126013    3532 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:13.126857    3532 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 01:27:13.126857    3532 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 01:27:13.127469    3532 certs.go:256] generating profile certs ...
	I0308 01:27:13.127539    3532 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\client.key
	I0308 01:27:13.128243    3532 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\client.crt with IP's: []
	I0308 01:27:13.222869    3532 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\client.crt ...
	I0308 01:27:13.222869    3532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\client.crt: {Name:mkeb0f2a5bb3f618f1dbc02834bfc5e591282511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:13.228962    3532 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\client.key ...
	I0308 01:27:13.228962    3532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\client.key: {Name:mk5338162e9bf0bf00676d94964732c038c1a4b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:13.230050    3532 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.key.ca257204
	I0308 01:27:13.231047    3532 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.crt.ca257204 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.53.54]
	I0308 01:27:13.481754    3532 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.crt.ca257204 ...
	I0308 01:27:13.481754    3532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.crt.ca257204: {Name:mkdd377018daa63db316fc4bfd5fccd0e26c6cf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:13.484387    3532 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.key.ca257204 ...
	I0308 01:27:13.484387    3532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.key.ca257204: {Name:mk751f3cb046c28b55774fe3f2e77a7914e57f04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:13.485610    3532 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.crt.ca257204 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.crt
	I0308 01:27:13.492333    3532 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.key.ca257204 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.key
	I0308 01:27:13.497667    3532 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.key
	I0308 01:27:13.497667    3532 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.crt with IP's: []
	I0308 01:27:13.885026    3532 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.crt ...
	I0308 01:27:13.885026    3532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.crt: {Name:mk07683b3a954eb0e4f56863772cd562f8cd650a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:13.887752    3532 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.key ...
	I0308 01:27:13.887752    3532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.key: {Name:mkf374e7b1feb909e230f0b0cb195580f35df7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:13.895576    3532 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 01:27:13.899274    3532 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 01:27:13.899393    3532 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 01:27:13.899516    3532 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 01:27:13.899516    3532 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 01:27:13.899516    3532 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 01:27:13.900294    3532 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 01:27:13.900874    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 01:27:13.938763    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 01:27:13.975550    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 01:27:14.016887    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 01:27:14.063454    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0308 01:27:14.109515    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 01:27:14.152202    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 01:27:14.195549    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 01:27:14.236790    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 01:27:14.278499    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 01:27:14.322975    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 01:27:14.364522    3532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 01:27:14.408589    3532 ssh_runner.go:195] Run: openssl version
	I0308 01:27:14.427078    3532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 01:27:14.455300    3532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:27:14.462002    3532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:27:14.473713    3532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:27:14.493785    3532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 01:27:14.525350    3532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 01:27:14.553792    3532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 01:27:14.561176    3532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 01:27:14.575126    3532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 01:27:14.596353    3532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0308 01:27:14.632677    3532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 01:27:14.669700    3532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 01:27:14.676507    3532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 01:27:14.687616    3532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 01:27:14.708929    3532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 01:27:14.739180    3532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 01:27:14.746304    3532 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 01:27:14.746304    3532 kubeadm.go:391] StartCluster: {Name:auto-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.53.54 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 01:27:14.757307    3532 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 01:27:14.790344    3532 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 01:27:14.817904    3532 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 01:27:14.845897    3532 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 01:27:14.863181    3532 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 01:27:14.863301    3532 kubeadm.go:156] found existing configuration files:
	
	I0308 01:27:14.878686    3532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 01:27:14.894806    3532 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 01:27:14.908495    3532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 01:27:14.937231    3532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 01:27:14.954020    3532 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 01:27:14.965712    3532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 01:27:14.993719    3532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 01:27:15.009400    3532 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 01:27:15.021116    3532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 01:27:15.051778    3532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 01:27:15.072742    3532 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 01:27:15.084149    3532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 01:27:15.099405    3532 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 01:27:15.350076    3532 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 01:27:16.484576    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:16.484576    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:16.493721    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName kindnet-503300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\boot2docker.iso'
	I0308 01:27:18.830681    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:18.837884    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:18.837884    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName kindnet-503300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\disk.vhd'
	I0308 01:27:21.243714    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:21.253736    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:21.253736    4296 main.go:141] libmachine: Starting VM...
	I0308 01:27:21.253736    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kindnet-503300
	I0308 01:27:24.203591    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:24.204067    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:24.204067    4296 main.go:141] libmachine: Waiting for host to start...
	I0308 01:27:24.204171    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:27:30.422498    3532 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 01:27:30.422498    3532 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 01:27:30.422498    3532 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 01:27:30.423024    3532 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 01:27:30.423330    3532 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 01:27:30.423559    3532 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 01:27:30.427730    3532 out.go:204]   - Generating certificates and keys ...
	I0308 01:27:30.428392    3532 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 01:27:30.428572    3532 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 01:27:30.428614    3532 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 01:27:30.428614    3532 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0308 01:27:30.428614    3532 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0308 01:27:30.428614    3532 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0308 01:27:30.428614    3532 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0308 01:27:30.429791    3532 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [auto-503300 localhost] and IPs [172.20.53.54 127.0.0.1 ::1]
	I0308 01:27:30.429791    3532 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0308 01:27:30.429791    3532 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [auto-503300 localhost] and IPs [172.20.53.54 127.0.0.1 ::1]
	I0308 01:27:30.430440    3532 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 01:27:30.430467    3532 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 01:27:30.430467    3532 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0308 01:27:30.430467    3532 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 01:27:30.431095    3532 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 01:27:30.431307    3532 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 01:27:30.431612    3532 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 01:27:30.431731    3532 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 01:27:30.431731    3532 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 01:27:30.432614    3532 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 01:27:30.435537    3532 out.go:204]   - Booting up control plane ...
	I0308 01:27:30.435537    3532 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 01:27:30.435537    3532 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 01:27:30.436084    3532 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 01:27:30.436241    3532 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 01:27:30.436241    3532 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 01:27:30.436241    3532 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 01:27:30.436910    3532 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 01:27:30.436910    3532 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.004135 seconds
	I0308 01:27:30.437699    3532 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 01:27:30.437992    3532 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 01:27:30.437992    3532 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 01:27:30.438654    3532 kubeadm.go:309] [mark-control-plane] Marking the node auto-503300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 01:27:30.438654    3532 kubeadm.go:309] [bootstrap-token] Using token: jux1em.0cf7kc2zweaoxk1n
	I0308 01:27:30.441564    3532 out.go:204]   - Configuring RBAC rules ...
	I0308 01:27:30.442370    3532 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 01:27:30.442370    3532 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 01:27:30.442913    3532 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 01:27:30.443140    3532 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 01:27:30.443140    3532 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 01:27:30.443140    3532 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 01:27:30.444221    3532 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 01:27:30.444418    3532 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 01:27:30.444680    3532 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 01:27:30.444680    3532 kubeadm.go:309] 
	I0308 01:27:30.444959    3532 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 01:27:30.444959    3532 kubeadm.go:309] 
	I0308 01:27:30.444959    3532 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 01:27:30.444959    3532 kubeadm.go:309] 
	I0308 01:27:30.444959    3532 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 01:27:30.444959    3532 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 01:27:30.444959    3532 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 01:27:30.444959    3532 kubeadm.go:309] 
	I0308 01:27:30.444959    3532 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 01:27:30.444959    3532 kubeadm.go:309] 
	I0308 01:27:30.446045    3532 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 01:27:30.446165    3532 kubeadm.go:309] 
	I0308 01:27:30.446335    3532 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 01:27:30.446611    3532 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 01:27:30.446871    3532 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 01:27:30.446871    3532 kubeadm.go:309] 
	I0308 01:27:30.446871    3532 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 01:27:30.446871    3532 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 01:27:30.446871    3532 kubeadm.go:309] 
	I0308 01:27:30.446871    3532 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jux1em.0cf7kc2zweaoxk1n \
	I0308 01:27:30.446871    3532 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 \
	I0308 01:27:30.448049    3532 kubeadm.go:309] 	--control-plane 
	I0308 01:27:30.448049    3532 kubeadm.go:309] 
	I0308 01:27:30.448429    3532 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 01:27:30.448492    3532 kubeadm.go:309] 
	I0308 01:27:30.448874    3532 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jux1em.0cf7kc2zweaoxk1n \
	I0308 01:27:30.449441    3532 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
	I0308 01:27:30.449552    3532 cni.go:84] Creating CNI manager for ""
	I0308 01:27:30.449616    3532 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0308 01:27:30.454403    3532 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 01:27:26.351354    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:26.361890    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:26.362007    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:28.778270    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:28.778354    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:29.780011    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:27:30.471227    3532 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 01:27:30.496842    3532 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 01:27:30.545705    3532 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 01:27:30.559124    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:30.561705    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-503300 minikube.k8s.io/updated_at=2024_03_08T01_27_30_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=auto-503300 minikube.k8s.io/primary=true
	I0308 01:27:30.592407    3532 ops.go:34] apiserver oom_adj: -16
	I0308 01:27:30.964414    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:31.479176    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:31.969034    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:32.472158    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:32.977294    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:31.885018    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:31.885018    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:31.885018    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:34.261075    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:34.261075    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:35.270312    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:27:33.475854    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:33.972280    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:34.465540    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:34.977719    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:35.475118    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:35.973158    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:36.468510    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:36.978878    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:37.475791    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:37.970083    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:37.336225    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:37.336225    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:37.336810    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:39.721931    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:39.721931    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:40.727603    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:27:38.474575    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:38.964719    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:39.477020    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:39.971315    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:40.464227    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:40.973582    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:41.471265    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:41.973781    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:42.479283    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:42.975951    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:43.111543    3532 kubeadm.go:1106] duration metric: took 12.5657197s to wait for elevateKubeSystemPrivileges
	W0308 01:27:43.111543    3532 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 01:27:43.111543    3532 kubeadm.go:393] duration metric: took 28.3649725s to StartCluster
	I0308 01:27:43.111543    3532 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:43.111543    3532 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 01:27:43.114116    3532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:43.115286    3532 start.go:234] Will wait 15m0s for node &{Name: IP:172.20.53.54 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 01:27:43.115286    3532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0308 01:27:43.121372    3532 out.go:177] * Verifying Kubernetes components...
	I0308 01:27:43.115914    3532 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 01:27:43.121372    3532 addons.go:69] Setting storage-provisioner=true in profile "auto-503300"
	I0308 01:27:43.117750    3532 config.go:182] Loaded profile config "auto-503300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:27:43.125082    3532 addons.go:234] Setting addon storage-provisioner=true in "auto-503300"
	I0308 01:27:43.121372    3532 addons.go:69] Setting default-storageclass=true in profile "auto-503300"
	I0308 01:27:43.125082    3532 host.go:66] Checking if "auto-503300" exists ...
	I0308 01:27:43.125082    3532 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-503300"
	I0308 01:27:43.126139    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:27:43.127852    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:27:43.141961    3532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:27:43.513829    3532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0308 01:27:43.525499    3532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:27:45.518019    3532 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.9924561s)
	I0308 01:27:45.518133    3532 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.0042429s)
	I0308 01:27:45.518279    3532 start.go:948] {"host.minikube.internal": 172.20.48.1} host record injected into CoreDNS's ConfigMap
	I0308 01:27:45.523362    3532 node_ready.go:35] waiting up to 15m0s for node "auto-503300" to be "Ready" ...
	I0308 01:27:45.564956    3532 node_ready.go:49] node "auto-503300" has status "Ready":"True"
	I0308 01:27:45.565051    3532 node_ready.go:38] duration metric: took 41.6886ms for node "auto-503300" to be "Ready" ...
	I0308 01:27:45.565126    3532 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:27:45.594681    3532 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace to be "Ready" ...
	I0308 01:27:45.622719    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:45.628028    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:45.631172    3532 addons.go:234] Setting addon default-storageclass=true in "auto-503300"
	I0308 01:27:45.631802    3532 host.go:66] Checking if "auto-503300" exists ...
	I0308 01:27:45.632843    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:27:45.732738    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:45.737652    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:45.740578    3532 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 01:27:42.853956    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:42.853956    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:42.853956    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:45.757539    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:45.761419    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:45.742765    3532 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 01:27:45.742765    3532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 01:27:45.743471    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:27:46.031742    3532 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-503300" context rescaled to 1 replicas
	I0308 01:27:47.618270    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:27:47.846952    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:47.862309    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:47.862397    3532 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 01:27:47.862397    3532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 01:27:47.862397    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:27:48.007901    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:48.007901    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:48.012351    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:46.768629    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:27:49.112246    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:49.112728    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:49.112815    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:49.622746    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:27:50.083298    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:50.088206    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:50.088287    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:50.673550    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:27:50.673550    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:50.673995    3532 sshutil.go:53] new ssh client: &{IP:172.20.53.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-503300\id_rsa Username:docker}
	I0308 01:27:50.818232    3532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 01:27:52.126626    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:27:52.609267    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:27:52.609267    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:52.623979    3532 sshutil.go:53] new ssh client: &{IP:172.20.53.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-503300\id_rsa Username:docker}
	I0308 01:27:52.768959    3532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 01:27:53.004052    3532 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0308 01:27:53.006594    3532 addons.go:505] duration metric: took 9.8905869s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0308 01:27:51.658272    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:27:51.658336    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:51.658399    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:27:53.693043    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:53.693043    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:53.693043    4296 machine.go:94] provisionDockerMachine start ...
	I0308 01:27:53.693043    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:27:55.640974    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:55.651454    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:55.651454    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:54.614297    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:27:57.113671    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:27:58.007814    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:27:58.007814    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:58.012456    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:27:58.013303    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:27:58.013303    4296 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 01:27:58.138761    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 01:27:58.138877    4296 buildroot.go:166] provisioning hostname "kindnet-503300"
	I0308 01:27:58.138877    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:00.053473    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:00.053473    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:00.064119    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:59.617462    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:02.112220    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:02.394797    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:02.394797    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:02.399516    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:28:02.400087    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:28:02.400151    4296 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-503300 && echo "kindnet-503300" | sudo tee /etc/hostname
	I0308 01:28:02.554465    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-503300
	
	I0308 01:28:02.554465    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:04.492241    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:04.492241    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:04.492241    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:04.613956    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:06.623558    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:06.805763    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:06.805763    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:06.811744    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:28:06.811861    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:28:06.811861    4296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-503300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-503300/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-503300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 01:28:06.957977    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
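The SSH command above is an idempotent /etc/hosts fix-up: it only rewrites the 127.0.1.1 entry when the new hostname is not already present. A hedged sketch of composing that command string for an arbitrary hostname (the wrapper function is hypothetical; the shell fragment mirrors the one in the log):

// Sketch only: build the hostname fix-up shell command shown above.
package main

import "fmt"

func hostsFixupCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() { fmt.Println(hostsFixupCmd("kindnet-503300")) }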
	I0308 01:28:06.957977    4296 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 01:28:06.957977    4296 buildroot.go:174] setting up certificates
	I0308 01:28:06.957977    4296 provision.go:84] configureAuth start
	I0308 01:28:06.957977    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:08.897727    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:08.897727    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:08.897727    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:11.198880    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:11.208684    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:11.208684    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:08.624782    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:11.112685    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:13.165613    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:13.165613    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:13.165701    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:15.438708    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:15.450339    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:15.450339    4296 provision.go:143] copyHostCerts
	I0308 01:28:15.450525    4296 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 01:28:15.450525    4296 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 01:28:15.451158    4296 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 01:28:15.452037    4296 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 01:28:15.452037    4296 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 01:28:15.452908    4296 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 01:28:15.454021    4296 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 01:28:15.454021    4296 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 01:28:15.454021    4296 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 01:28:15.455541    4296 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-503300 san=[127.0.0.1 172.20.59.53 kindnet-503300 localhost minikube]
	I0308 01:28:15.660535    4296 provision.go:177] copyRemoteCerts
	I0308 01:28:15.676008    4296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 01:28:15.676008    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:13.605668    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:15.616801    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:17.619864    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:17.594577    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:17.605242    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:17.605242    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:19.930601    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:19.930601    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:19.931485    4296 sshutil.go:53] new ssh client: &{IP:172.20.59.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\id_rsa Username:docker}
	I0308 01:28:20.039575    4296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3635266s)
	I0308 01:28:20.040344    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 01:28:20.110834    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 01:28:20.171515    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I0308 01:28:20.216759    4296 provision.go:87] duration metric: took 13.2586572s to configureAuth
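configureAuth above regenerates a server certificate whose SAN list covers the loopback address, the VM IP, the hostname, localhost and minikube, signed by the existing CA. The following is a sketch only, not minikube's implementation, of issuing such a certificate with Go's crypto/x509; the CA file paths, the PKCS#1 key format and the validity period are assumptions:

// Sketch only: issue a server cert with the SAN set logged above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, _ := os.ReadFile("ca.pem")        // assumed path
	caKeyPEM, _ := os.ReadFile("ca-key.pem") // assumed path
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		panic("could not decode CA material")
	}
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-503300"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kindnet-503300", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.59.53")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}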
	I0308 01:28:20.216828    4296 buildroot.go:189] setting minikube options for container-runtime
	I0308 01:28:20.217344    4296 config.go:182] Loaded profile config "kindnet-503300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:28:20.217457    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:20.118438    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:22.607726    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:24.113267    3532 pod_ready.go:92] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"True"
	I0308 01:28:24.113321    3532 pod_ready.go:81] duration metric: took 38.5182775s for pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.113384    3532 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-rbjnx" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.118920    3532 pod_ready.go:97] error getting pod "coredns-5dd5756b68-rbjnx" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-rbjnx" not found
	I0308 01:28:24.118975    3532 pod_ready.go:81] duration metric: took 5.5907ms for pod "coredns-5dd5756b68-rbjnx" in "kube-system" namespace to be "Ready" ...
	E0308 01:28:24.119053    3532 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-rbjnx" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-rbjnx" not found
	I0308 01:28:24.119053    3532 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.128592    3532 pod_ready.go:92] pod "etcd-auto-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:28:24.128642    3532 pod_ready.go:81] duration metric: took 9.5179ms for pod "etcd-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.128642    3532 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.136071    3532 pod_ready.go:92] pod "kube-apiserver-auto-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:28:24.136071    3532 pod_ready.go:81] duration metric: took 7.4291ms for pod "kube-apiserver-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.136071    3532 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.144168    3532 pod_ready.go:92] pod "kube-controller-manager-auto-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:28:24.144168    3532 pod_ready.go:81] duration metric: took 8.0967ms for pod "kube-controller-manager-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.144168    3532 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-pstch" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.321007    3532 pod_ready.go:92] pod "kube-proxy-pstch" in "kube-system" namespace has status "Ready":"True"
	I0308 01:28:24.321109    3532 pod_ready.go:81] duration metric: took 176.9397ms for pod "kube-proxy-pstch" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.321109    3532 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.717556    3532 pod_ready.go:92] pod "kube-scheduler-auto-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:28:24.717556    3532 pod_ready.go:81] duration metric: took 396.4427ms for pod "kube-scheduler-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.717665    3532 pod_ready.go:38] duration metric: took 39.152171s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
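The pod_ready lines above are a plain poll: the same Ready check is repeated every couple of seconds until it reports True or the 15m budget runs out (here coredns turned Ready after roughly 38.5s). A generic sketch of that wait pattern, with a stand-in condition instead of a real Kubernetes API call:

// Sketch only: poll a condition until it holds or the timeout expires.
package main

import (
	"fmt"
	"time"
)

func waitFor(timeout, interval time.Duration, ready func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := ready()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("condition not met after %s", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := waitFor(15*time.Minute, 2*time.Second, func() (bool, error) {
		return time.Since(start) > 5*time.Second, nil // stand-in for a pod Ready check
	})
	fmt.Println("done:", err)
}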
	I0308 01:28:24.717735    3532 api_server.go:52] waiting for apiserver process to appear ...
	I0308 01:28:24.730790    3532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 01:28:24.761106    3532 api_server.go:72] duration metric: took 41.6452137s to wait for apiserver process to appear ...
	I0308 01:28:24.761106    3532 api_server.go:88] waiting for apiserver healthz status ...
	I0308 01:28:24.761106    3532 api_server.go:253] Checking apiserver healthz at https://172.20.53.54:8443/healthz ...
	I0308 01:28:24.768097    3532 api_server.go:279] https://172.20.53.54:8443/healthz returned 200:
	ok
	I0308 01:28:24.771635    3532 api_server.go:141] control plane version: v1.28.4
	I0308 01:28:24.771635    3532 api_server.go:131] duration metric: took 10.5287ms to wait for apiserver health ...
	I0308 01:28:24.771635    3532 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 01:28:24.925203    3532 system_pods.go:59] 7 kube-system pods found
	I0308 01:28:24.925203    3532 system_pods.go:61] "coredns-5dd5756b68-phwrk" [f65c338e-f008-4ba8-ae07-263660851a7b] Running
	I0308 01:28:24.925728    3532 system_pods.go:61] "etcd-auto-503300" [1e75ae98-597e-4bb5-ab7b-b1a55acab24c] Running
	I0308 01:28:24.925728    3532 system_pods.go:61] "kube-apiserver-auto-503300" [e2df06cf-c573-457d-ac05-b0bc9c100ce7] Running
	I0308 01:28:24.926000    3532 system_pods.go:61] "kube-controller-manager-auto-503300" [384c0def-5b56-4d81-b8e1-5c22cfcfc666] Running
	I0308 01:28:24.926066    3532 system_pods.go:61] "kube-proxy-pstch" [b412098b-b79d-4940-af7d-3913d618242c] Running
	I0308 01:28:24.926066    3532 system_pods.go:61] "kube-scheduler-auto-503300" [941e75d4-af17-4bcd-9ae7-dd4e0e281fe7] Running
	I0308 01:28:24.926066    3532 system_pods.go:61] "storage-provisioner" [a9fcf94b-478e-496a-a649-bf2310768283] Running
	I0308 01:28:24.926066    3532 system_pods.go:74] duration metric: took 154.4295ms to wait for pod list to return data ...
	I0308 01:28:24.926066    3532 default_sa.go:34] waiting for default service account to be created ...
	I0308 01:28:25.120077    3532 default_sa.go:45] found service account: "default"
	I0308 01:28:25.120235    3532 default_sa.go:55] duration metric: took 194.1667ms for default service account to be created ...
	I0308 01:28:25.120235    3532 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 01:28:25.318119    3532 system_pods.go:86] 7 kube-system pods found
	I0308 01:28:25.318119    3532 system_pods.go:89] "coredns-5dd5756b68-phwrk" [f65c338e-f008-4ba8-ae07-263660851a7b] Running
	I0308 01:28:25.318119    3532 system_pods.go:89] "etcd-auto-503300" [1e75ae98-597e-4bb5-ab7b-b1a55acab24c] Running
	I0308 01:28:25.318119    3532 system_pods.go:89] "kube-apiserver-auto-503300" [e2df06cf-c573-457d-ac05-b0bc9c100ce7] Running
	I0308 01:28:25.318119    3532 system_pods.go:89] "kube-controller-manager-auto-503300" [384c0def-5b56-4d81-b8e1-5c22cfcfc666] Running
	I0308 01:28:25.318119    3532 system_pods.go:89] "kube-proxy-pstch" [b412098b-b79d-4940-af7d-3913d618242c] Running
	I0308 01:28:25.318119    3532 system_pods.go:89] "kube-scheduler-auto-503300" [941e75d4-af17-4bcd-9ae7-dd4e0e281fe7] Running
	I0308 01:28:25.318119    3532 system_pods.go:89] "storage-provisioner" [a9fcf94b-478e-496a-a649-bf2310768283] Running
	I0308 01:28:25.318119    3532 system_pods.go:126] duration metric: took 197.8822ms to wait for k8s-apps to be running ...
	I0308 01:28:25.318119    3532 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 01:28:25.334041    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 01:28:25.358538    3532 system_svc.go:56] duration metric: took 40.4191ms WaitForService to wait for kubelet
	I0308 01:28:25.358538    3532 kubeadm.go:576] duration metric: took 42.2428548s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 01:28:25.358538    3532 node_conditions.go:102] verifying NodePressure condition ...
	I0308 01:28:25.522748    3532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 01:28:25.522748    3532 node_conditions.go:123] node cpu capacity is 2
	I0308 01:28:25.522748    3532 node_conditions.go:105] duration metric: took 164.2077ms to run NodePressure ...
	I0308 01:28:25.522748    3532 start.go:240] waiting for startup goroutines ...
	I0308 01:28:25.522748    3532 start.go:245] waiting for cluster config update ...
	I0308 01:28:25.522748    3532 start.go:254] writing updated cluster config ...
	I0308 01:28:25.535883    3532 ssh_runner.go:195] Run: rm -f paused
	I0308 01:28:25.672414    3532 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 01:28:25.676771    3532 out.go:177] * Done! kubectl is now configured to use "auto-503300" cluster and "default" namespace by default
	I0308 01:28:22.126656    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:22.138978    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:22.139070    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:24.455711    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:24.455711    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:24.464696    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:28:24.465103    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:28:24.465103    4296 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 01:28:24.599825    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 01:28:24.599825    4296 buildroot.go:70] root file system type: tmpfs
	I0308 01:28:24.600424    4296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 01:28:24.600553    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:26.612241    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:26.612241    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:26.612490    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:29.079624    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:29.079624    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:29.088721    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:28:29.089257    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:28:29.089414    4296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 01:28:29.251092    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 01:28:29.251092    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:31.256377    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:31.256377    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:31.256377    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:33.656031    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:33.656031    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:33.671205    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:28:33.671840    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:28:33.671896    4296 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 01:28:34.797424    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0308 01:28:34.797424    4296 machine.go:97] duration metric: took 41.1039953s to provisionDockerMachine
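The docker.service rewrite above uses a write-then-swap idiom: the rendered unit is written to docker.service.new, compared with the installed file via diff -u, and only if they differ is it moved into place and the daemon reloaded, enabled and restarted (on this fresh VM the old file does not exist, hence the "can't stat" diff output). A sketch of a helper that builds that remote command; the helper itself is hypothetical, the shell text mirrors the log:

// Sketch only: build the conditional unit-install command shown above.
package main

import "fmt"

func installUnitCmd(path string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || "+
			"{ sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && "+
			"sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }", path)
}

func main() { fmt.Println(installUnitCmd("/lib/systemd/system/docker.service")) }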
	I0308 01:28:34.797424    4296 client.go:171] duration metric: took 1m52.2389028s to LocalClient.Create
	I0308 01:28:34.797424    4296 start.go:167] duration metric: took 1m52.2389028s to libmachine.API.Create "kindnet-503300"
	I0308 01:28:34.797424    4296 start.go:293] postStartSetup for "kindnet-503300" (driver="hyperv")
	I0308 01:28:34.797424    4296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 01:28:34.810079    4296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 01:28:34.810079    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:36.852532    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:36.852532    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:36.862855    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:39.285308    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:39.285360    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:39.285782    4296 sshutil.go:53] new ssh client: &{IP:172.20.59.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\id_rsa Username:docker}
	I0308 01:28:39.392996    4296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5828732s)
	I0308 01:28:39.405336    4296 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 01:28:39.412176    4296 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 01:28:39.412287    4296 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 01:28:39.412869    4296 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 01:28:39.414199    4296 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 01:28:39.424125    4296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 01:28:39.446277    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 01:28:39.488095    4296 start.go:296] duration metric: took 4.6906263s for postStartSetup
	I0308 01:28:39.489850    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:41.519810    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:41.530936    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:41.531034    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:43.909931    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:43.909931    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:43.921285    4296 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\config.json ...
	I0308 01:28:43.924289    4296 start.go:128] duration metric: took 2m1.3705466s to createHost
	I0308 01:28:43.924419    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:45.870673    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:45.870673    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:45.870673    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:48.179331    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:48.179331    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:48.183797    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:28:48.184620    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:28:48.184620    4296 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 01:28:48.316135    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709861328.318568458
	
	I0308 01:28:48.316135    4296 fix.go:216] guest clock: 1709861328.318568458
	I0308 01:28:48.316135    4296 fix.go:229] Guest: 2024-03-08 01:28:48.318568458 +0000 UTC Remote: 2024-03-08 01:28:43.9242891 +0000 UTC m=+287.842007501 (delta=4.394279358s)
	I0308 01:28:48.316135    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:50.238448    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:50.242936    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:50.242936    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:52.847972   14284 start.go:364] duration metric: took 4m42.5171573s to acquireMachinesLock for "calico-503300"
	I0308 01:28:52.848148   14284 start.go:93] Provisioning new machine with config: &{Name:calico-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}

	I0308 01:28:52.848703   14284 start.go:125] createHost starting for "" (driver="hyperv")
	I0308 01:28:52.855026   14284 out.go:204] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0308 01:28:52.855464   14284 start.go:159] libmachine.API.Create for "calico-503300" (driver="hyperv")
	I0308 01:28:52.855464   14284 client.go:168] LocalClient.Create starting
	I0308 01:28:52.856125   14284 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0308 01:28:52.856125   14284 main.go:141] libmachine: Decoding PEM data...
	I0308 01:28:52.856799   14284 main.go:141] libmachine: Parsing certificate...
	I0308 01:28:52.856948   14284 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0308 01:28:52.856948   14284 main.go:141] libmachine: Decoding PEM data...
	I0308 01:28:52.856948   14284 main.go:141] libmachine: Parsing certificate...
	I0308 01:28:52.856948   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0308 01:28:54.706282   14284 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0308 01:28:54.706350   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:54.706415   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0308 01:28:52.682059    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:52.682059    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:52.698526    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:28:52.699103    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:28:52.699103    4296 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709861328
	I0308 01:28:52.847498    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 01:28:48 UTC 2024
	
	I0308 01:28:52.847498    4296 fix.go:236] clock set: Fri Mar  8 01:28:48 UTC 2024
	 (err=<nil>)
	I0308 01:28:52.847498    4296 start.go:83] releasing machines lock for "kindnet-503300", held for 2m10.2943561s
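Before releasing the machines lock, the driver checks guest clock skew: it reads the guest clock with date +%s.%N, compares it with the host clock (a delta of about 4.4s in this run), and resets the guest with sudo date -s @<epoch>. A sketch of that check; runSSH is a placeholder for the SSH runner and the 2s threshold is an assumption, not minikube's value:

// Sketch only: detect and correct guest clock drift as seen above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func runSSH(cmd string) string {
	// placeholder: pretend the guest answered `date +%s.%N`
	return "1709861328.318568458"
}

func main() {
	out := strings.TrimSpace(runSSH("date +%s.%N"))
	secs, _ := strconv.ParseFloat(out, 64)
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	drift := guest.Sub(time.Now())
	fmt.Println("guest clock:", guest.UTC(), "drift:", drift)
	if drift > 2*time.Second || drift < -2*time.Second {
		runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	}
}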
	I0308 01:28:52.847845    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:54.876884    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:54.887303    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:54.887381    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:56.367257   14284 main.go:141] libmachine: [stdout =====>] : False
	
	I0308 01:28:56.374727   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:56.374727   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0308 01:28:57.963627   14284 main.go:141] libmachine: [stdout =====>] : True
	
	I0308 01:28:57.963627   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:57.963627   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0308 01:28:57.337535    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:57.338403    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:57.341909    4296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 01:28:57.341967    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:57.358821    4296 ssh_runner.go:195] Run: cat /version.json
	I0308 01:28:57.358821    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:59.726995    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:59.726995    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:59.738242    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:59.738408    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:59.738408    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:59.738632    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:29:01.817577   14284 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0308 01:29:01.817577   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:01.820721   14284 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 01:29:02.310723   14284 main.go:141] libmachine: Creating SSH key...
	I0308 01:29:02.505648   14284 main.go:141] libmachine: Creating VM...
	I0308 01:29:02.505648   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0308 01:29:02.517427    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:29:02.517495    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:02.517495    4296 sshutil.go:53] new ssh client: &{IP:172.20.59.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\id_rsa Username:docker}
	I0308 01:29:02.581115    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:29:02.581115    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:02.581871    4296 sshutil.go:53] new ssh client: &{IP:172.20.59.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\id_rsa Username:docker}
	I0308 01:29:02.704796    4296 ssh_runner.go:235] Completed: cat /version.json: (5.3459257s)
	I0308 01:29:02.704897    4296 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3628372s)
	I0308 01:29:02.715802    4296 ssh_runner.go:195] Run: systemctl --version
	I0308 01:29:02.738006    4296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 01:29:02.745609    4296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 01:29:02.755378    4296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 01:29:02.784503    4296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 01:29:02.784503    4296 start.go:494] detecting cgroup driver to use...
	I0308 01:29:02.784503    4296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:29:02.828784    4296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 01:29:02.858572    4296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 01:29:02.876733    4296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 01:29:02.888844    4296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 01:29:02.925656    4296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:29:02.975010    4296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 01:29:03.012592    4296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:29:03.047689    4296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 01:29:03.079703    4296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 01:29:03.121888    4296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 01:29:03.152031    4296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 01:29:03.179924    4296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:29:03.393493    4296 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0308 01:29:03.424130    4296 start.go:494] detecting cgroup driver to use...
	I0308 01:29:03.436874    4296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 01:29:03.477114    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:29:03.513082    4296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 01:29:03.563020    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:29:03.601435    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 01:29:03.637613    4296 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0308 01:29:03.856671    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 01:29:03.879459    4296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:29:03.923328    4296 ssh_runner.go:195] Run: which cri-dockerd
	I0308 01:29:03.939522    4296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 01:29:03.956398    4296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 01:29:03.996726    4296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 01:29:04.204935    4296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 01:29:04.408789    4296 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 01:29:04.408996    4296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0308 01:29:04.453363    4296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:29:04.650247    4296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 01:29:06.219372    4296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5691103s)
	I0308 01:29:06.231533    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 01:29:06.264324    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:29:06.301558    4296 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 01:29:06.507331    4296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 01:29:06.695957    4296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:29:06.892725    4296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 01:29:06.932604    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:29:06.965421    4296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:29:07.177045    4296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 01:29:07.275626    4296 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 01:29:07.297656    4296 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 01:29:07.318428    4296 start.go:562] Will wait 60s for crictl version
	I0308 01:29:07.332698    4296 ssh_runner.go:195] Run: which crictl
	I0308 01:29:07.351395    4296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 01:29:07.421295    4296 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 01:29:07.433169    4296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:29:07.481168    4296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:29:05.498776   14284 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0308 01:29:05.509686   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:05.509733   14284 main.go:141] libmachine: Using switch "Default Switch"
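Switch selection above asks PowerShell for the VM switches as JSON (ConvertTo-Json) and picks either an External switch or the built-in Default Switch by its well-known Id. A small sketch of decoding that JSON payload in Go, using the exact output captured in the log:

// Sketch only: decode the Get-VMSwitch JSON listing shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func main() {
	raw := `[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`
	var switches []vmSwitch
	if err := json.Unmarshal([]byte(raw), &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}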
	I0308 01:29:05.509733   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0308 01:29:07.288320   14284 main.go:141] libmachine: [stdout =====>] : True
	
	I0308 01:29:07.288401   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:07.288533   14284 main.go:141] libmachine: Creating VHD
	I0308 01:29:07.288584   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0308 01:29:07.515039    4296 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 01:29:07.515039    4296 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 01:29:07.519451    4296 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 01:29:07.519451    4296 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 01:29:07.519451    4296 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 01:29:07.519451    4296 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 01:29:07.523963    4296 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 01:29:07.524043    4296 ip.go:210] interface addr: 172.20.48.1/20
	I0308 01:29:07.534565    4296 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 01:29:07.540976    4296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 01:29:07.562068    4296 kubeadm.go:877] updating cluster {Name:kindnet-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:172.20.59.53 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 01:29:07.562629    4296 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 01:29:07.570935    4296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:29:07.597464    4296 docker.go:685] Got preloaded images: 
	I0308 01:29:07.597464    4296 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0308 01:29:07.608871    4296 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0308 01:29:07.639344    4296 ssh_runner.go:195] Run: which lz4
	I0308 01:29:07.656448    4296 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 01:29:07.666711    4296 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 01:29:07.667047    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0308 01:29:10.128643    4296 docker.go:649] duration metric: took 2.483628s to copy over tarball
	I0308 01:29:10.141105    4296 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 01:29:11.443487   14284 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 0194326F-3E01-4A40-86E0-D3138E67F54E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0308 01:29:11.443586   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:11.443677   14284 main.go:141] libmachine: Writing magic tar header
	I0308 01:29:11.443770   14284 main.go:141] libmachine: Writing SSH key tar header
	I0308 01:29:11.456176   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0308 01:29:14.654342   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:14.654342   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:14.654342   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\disk.vhd' -SizeBytes 20000MB
	I0308 01:29:17.452263   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:17.463255   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:17.463344   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM calico-503300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0308 01:29:19.237034    4296 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.0956412s)
	I0308 01:29:19.237034    4296 ssh_runner.go:146] rm: /preloaded.tar.lz4
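The ssh_runner lines above are the whole preload path: scp the preloaded-images tarball to /preloaded.tar.lz4, unpack it into /var with lz4 while preserving extended attributes, then delete it. A minimal Go sketch of that sequence follows; plain os/exec stands in for minikube's SSH runner, which is an assumption made only to keep the example self-contained.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// extractPreload mirrors the sequence in the log: verify the tarball landed,
// then unpack it into /var with lz4, preserving extended attributes.
// Plain exec.Command stands in for minikube's SSH runner (an assumption).
func extractPreload(tarball string) error {
	if err := exec.Command("stat", "-c", "%s %y", tarball).Run(); err != nil {
		return fmt.Errorf("%s not found on guest: %w", tarball, err)
	}
	out, err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}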
	I0308 01:29:19.321667    4296 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0308 01:29:19.341678    4296 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0308 01:29:19.389615    4296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:29:19.610569    4296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 01:29:23.596370   14284 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	calico-503300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0308 01:29:23.607494   14284 main.go:141] libmachine: [stderr =====>] : 
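Each [executing ==>] entry in the calico-503300 provisioning above (Convert-VHD, Resize-VHD, New-VM, and the Set-VM* calls that follow) is one non-interactive PowerShell invocation of a Hyper-V cmdlet. A bare-bones sketch of issuing such a call from Go is below; the runHyperV helper name is invented for illustration and errors are simply passed back with the combined output.

package main

import (
	"fmt"
	"os/exec"
)

// runHyperV executes one Hyper-V cmdlet through PowerShell, matching the
// "powershell.exe -NoProfile -NonInteractive <cmdlet>" pattern in the log.
func runHyperV(cmdlet string) (string, error) {
	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
	out, err := exec.Command(ps, "-NoProfile", "-NonInteractive", cmdlet).CombinedOutput()
	return string(out), err
}

func main() {
	// e.g. the Resize-VHD call from the log above
	out, err := runHyperV(`Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\disk.vhd' -SizeBytes 20000MB`)
	fmt.Println(out, err)
}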
	I0308 01:29:23.607614   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName calico-503300 -DynamicMemoryEnabled $false
	I0308 01:29:23.446616    4296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.8359364s)
	I0308 01:29:23.457539    4296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:29:23.485281    4296 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0308 01:29:23.485281    4296 cache_images.go:84] Images are preloaded, skipping loading
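The preload decision above reduces to comparing the images Docker already reports against the ones the cluster needs. A rough sketch of that comparison, assuming only that `docker images --format {{.Repository}}:{{.Tag}}` is available on the guest as in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded reports whether every required image already shows up in
// `docker images --format {{.Repository}}:{{.Tag}}` output, the same check
// the log performs before deciding to skip loading. Sketch only, not
// minikube's actual code.
func imagesPreloaded(required []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{"registry.k8s.io/kube-apiserver:v1.28.4"})
	fmt.Println(ok, err)
}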
	I0308 01:29:23.485281    4296 kubeadm.go:928] updating node { 172.20.59.53 8443 v1.28.4 docker true true} ...
	I0308 01:29:23.485848    4296 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-503300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.59.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:kindnet-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0308 01:29:23.497187    4296 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0308 01:29:23.531386    4296 cni.go:84] Creating CNI manager for "kindnet"
	I0308 01:29:23.531386    4296 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 01:29:23.531386    4296 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.59.53 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-503300 NodeName:kindnet-503300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.59.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.59.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 01:29:23.531918    4296 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.59.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kindnet-503300"
	  kubeletExtraArgs:
	    node-ip: 172.20.59.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.59.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 01:29:23.543438    4296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 01:29:23.561128    4296 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 01:29:23.573283    4296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 01:29:23.593564    4296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0308 01:29:23.627397    4296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 01:29:23.656318    4296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0308 01:29:23.704450    4296 ssh_runner.go:195] Run: grep 172.20.59.53	control-plane.minikube.internal$ /etc/hosts
	I0308 01:29:23.710637    4296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.59.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 01:29:23.747766    4296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:29:23.947467    4296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:29:23.977331    4296 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300 for IP: 172.20.59.53
	I0308 01:29:23.977397    4296 certs.go:194] generating shared ca certs ...
	I0308 01:29:23.977397    4296 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:23.977936    4296 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 01:29:23.978004    4296 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 01:29:23.978004    4296 certs.go:256] generating profile certs ...
	I0308 01:29:23.979950    4296 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\client.key
	I0308 01:29:23.980054    4296 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\client.crt with IP's: []
	I0308 01:29:24.668337    4296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\client.crt ...
	I0308 01:29:24.668337    4296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\client.crt: {Name:mk8f465e51edeb407eb33cac94211a7e4a114757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:24.678831    4296 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\client.key ...
	I0308 01:29:24.678831    4296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\client.key: {Name:mk58a3b04c69666836f729848c9655d649721fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:24.680279    4296 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.key.7a7734c3
	I0308 01:29:24.680279    4296 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.crt.7a7734c3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.59.53]
	I0308 01:29:24.988426    4296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.crt.7a7734c3 ...
	I0308 01:29:24.988426    4296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.crt.7a7734c3: {Name:mk95a2638b13f7ddc8d5da186f034c40eec335c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:24.996943    4296 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.key.7a7734c3 ...
	I0308 01:29:24.996943    4296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.key.7a7734c3: {Name:mke8fbabdecb4785ede3c8aaad268f9abab5b5d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:24.998570    4296 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.crt.7a7734c3 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.crt
	I0308 01:29:24.998914    4296 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.key.7a7734c3 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.key
	I0308 01:29:25.009100    4296 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.key
	I0308 01:29:25.009892    4296 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.crt with IP's: []
	I0308 01:29:25.081249    4296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.crt ...
	I0308 01:29:25.081249    4296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.crt: {Name:mk80dc2234368544f4797a59ca64fefb459352cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:25.091406    4296 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.key ...
	I0308 01:29:25.091406    4296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.key: {Name:mkd2c1f44a1d4baf197686f0dcb458f5bf6bbd8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
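The certs.go steps above mint the profile certificates (client, apiserver, proxy-client) signed by the shared minikube CA, with the apiserver certificate covering the SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 172.20.59.53). A condensed crypto/x509 sketch of issuing one such cert is below; subjects, lifetimes and file handling are simplified and the function name is invented, so treat it as an illustration rather than minikube's actual code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a certificate signed by the CA for the given IP SANs,
// roughly what the "generating signed profile cert" steps do for apiserver.crt.
// Illustrative sketch: serial numbers, subjects and lifetimes are placeholders.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {} // sketch only; loading the CA and writing PEM files is omitted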
	I0308 01:29:25.101906    4296 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 01:29:25.104228    4296 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 01:29:25.104543    4296 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 01:29:25.104543    4296 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 01:29:25.105233    4296 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 01:29:25.105694    4296 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 01:29:25.106089    4296 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 01:29:25.106714    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 01:29:25.162950    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 01:29:25.208020    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 01:29:25.251784    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 01:29:25.298451    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0308 01:29:25.345109    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 01:29:25.399616    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 01:29:25.449923    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 01:29:25.500462    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 01:29:25.540109    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 01:29:25.579255    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 01:29:25.629559    4296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 01:29:25.670536    4296 ssh_runner.go:195] Run: openssl version
	I0308 01:29:25.688717    4296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 01:29:25.718652    4296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 01:29:25.726137    4296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 01:29:25.739775    4296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 01:29:25.759024    4296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 01:29:25.793766    4296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 01:29:25.824024    4296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:29:25.833565    4296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:29:25.845940    4296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:29:25.866936    4296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 01:29:25.896181    4296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 01:29:25.924715    4296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 01:29:25.931773    4296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 01:29:25.948285    4296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 01:29:25.975211    4296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
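The openssl/ln pairs above install each CA into the guest's OpenSSL trust store: copy the PEM under /usr/share/ca-certificates, ask `openssl x509 -hash -noout` for its subject hash, and symlink /etc/ssl/certs/<hash>.0 to it (b5213941.0, 3ec20f2e.0 and 51391683.0 in this run). A small sketch of that scheme, shelling out to openssl the same way the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkIntoTrustStore reproduces the hash-and-symlink step from the log:
// OpenSSL looks certificates up by <subject-hash>.0 under /etc/ssl/certs.
// Sketch only; the real flow runs these commands over SSH with sudo.
func linkIntoTrustStore(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}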
	I0308 01:29:26.013282    4296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 01:29:26.021615    4296 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 01:29:26.021615    4296 kubeadm.go:391] StartCluster: {Name:kindnet-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:172.20.59.53 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 01:29:26.032056    4296 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 01:29:26.074338    4296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 01:29:26.107788    4296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 01:29:26.144676    4296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 01:29:26.162092    4296 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 01:29:26.162092    4296 kubeadm.go:156] found existing configuration files:
	
	I0308 01:29:26.174546    4296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 01:29:26.191168    4296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 01:29:26.204032    4296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 01:29:26.235037    4296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 01:29:26.251753    4296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 01:29:26.264158    4296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 01:29:26.299752    4296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 01:29:26.322295    4296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 01:29:26.339118    4296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 01:29:26.371065    4296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 01:29:26.390208    4296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 01:29:26.403906    4296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 01:29:26.421437    4296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 01:29:26.491372    4296 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 01:29:26.491372    4296 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 01:29:26.686250    4296 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 01:29:26.686347    4296 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 01:29:26.686347    4296 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 01:29:27.096139    4296 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 01:29:25.847509   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:25.852449   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:25.852633   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor calico-503300 -Count 2
	I0308 01:29:28.090979   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:28.090979   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:28.090979   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName calico-503300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\boot2docker.iso'
	I0308 01:29:27.107309    4296 out.go:204]   - Generating certificates and keys ...
	I0308 01:29:27.109744    4296 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 01:29:27.109982    4296 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 01:29:27.313995    4296 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 01:29:27.395081    4296 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0308 01:29:27.929680    4296 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0308 01:29:28.330719    4296 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0308 01:29:28.591316    4296 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0308 01:29:28.591377    4296 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kindnet-503300 localhost] and IPs [172.20.59.53 127.0.0.1 ::1]
	I0308 01:29:29.035541    4296 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0308 01:29:29.036134    4296 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kindnet-503300 localhost] and IPs [172.20.59.53 127.0.0.1 ::1]
	I0308 01:29:29.241840    4296 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 01:29:29.381770    4296 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 01:29:29.534867    4296 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0308 01:29:29.534867    4296 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 01:29:29.795063    4296 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 01:29:30.078442    4296 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 01:29:30.511356    4296 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 01:29:31.171264    4296 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 01:29:31.175285    4296 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 01:29:31.180490    4296 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 01:29:31.183259    4296 out.go:204]   - Booting up control plane ...
	I0308 01:29:31.183259    4296 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 01:29:31.183925    4296 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 01:29:31.186299    4296 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 01:29:31.218708    4296 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 01:29:31.220755    4296 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 01:29:31.220755    4296 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 01:29:30.629964   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:30.629964   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:30.630251   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName calico-503300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\disk.vhd'
	I0308 01:29:33.294388   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:33.294455   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:33.294455   14284 main.go:141] libmachine: Starting VM...
	I0308 01:29:33.294455   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM calico-503300
	I0308 01:29:31.419492    4296 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 01:29:36.368100   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:36.368180   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:36.368261   14284 main.go:141] libmachine: Waiting for host to start...
	I0308 01:29:36.368261   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:29:38.735807   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:38.735885   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:38.736011   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:29:39.921440    4296 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.504608 seconds
	I0308 01:29:39.921440    4296 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 01:29:39.960928    4296 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 01:29:40.532501    4296 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 01:29:40.533394    4296 kubeadm.go:309] [mark-control-plane] Marking the node kindnet-503300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 01:29:41.052261    4296 kubeadm.go:309] [bootstrap-token] Using token: 4fmjec.4wkw7d5f8hy8oofx
	I0308 01:29:41.054824    4296 out.go:204]   - Configuring RBAC rules ...
	I0308 01:29:41.055427    4296 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 01:29:41.066772    4296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 01:29:41.081384    4296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 01:29:41.105840    4296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 01:29:41.115482    4296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 01:29:41.122916    4296 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 01:29:41.151732    4296 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 01:29:41.537117    4296 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 01:29:41.587087    4296 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 01:29:41.587844    4296 kubeadm.go:309] 
	I0308 01:29:41.588907    4296 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 01:29:41.588969    4296 kubeadm.go:309] 
	I0308 01:29:41.589145    4296 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 01:29:41.589145    4296 kubeadm.go:309] 
	I0308 01:29:41.589145    4296 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 01:29:41.589145    4296 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 01:29:41.589943    4296 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 01:29:41.589995    4296 kubeadm.go:309] 
	I0308 01:29:41.590330    4296 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 01:29:41.590434    4296 kubeadm.go:309] 
	I0308 01:29:41.590662    4296 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 01:29:41.590662    4296 kubeadm.go:309] 
	I0308 01:29:41.590893    4296 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 01:29:41.591103    4296 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 01:29:41.591824    4296 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 01:29:41.591936    4296 kubeadm.go:309] 
	I0308 01:29:41.592383    4296 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 01:29:41.592641    4296 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 01:29:41.592703    4296 kubeadm.go:309] 
	I0308 01:29:41.593063    4296 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4fmjec.4wkw7d5f8hy8oofx \
	I0308 01:29:41.593063    4296 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 \
	I0308 01:29:41.593063    4296 kubeadm.go:309] 	--control-plane 
	I0308 01:29:41.593063    4296 kubeadm.go:309] 
	I0308 01:29:41.593063    4296 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 01:29:41.593063    4296 kubeadm.go:309] 
	I0308 01:29:41.594433    4296 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4fmjec.4wkw7d5f8hy8oofx \
	I0308 01:29:41.594433    4296 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
	I0308 01:29:41.595013    4296 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
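The --discovery-token-ca-cert-hash in the join commands printed above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch for recomputing it from ca.crt, e.g. to verify the value the log shows; the cert path is taken from the log and only the standard library is used:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

// caCertHash prints the kubeadm-style "sha256:<hex>" pin for a CA cert:
// a SHA-256 digest over the certificate's SubjectPublicKeyInfo in DER form.
func caCertHash(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	hash, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(hash)
}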
	I0308 01:29:41.595104    4296 cni.go:84] Creating CNI manager for "kindnet"
	I0308 01:29:41.598409    4296 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0308 01:29:41.375904   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:41.375904   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:42.387224   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:29:44.574220   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:44.574353   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:44.574407   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:29:41.611149    4296 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0308 01:29:41.613072    4296 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0308 01:29:41.613072    4296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0308 01:29:41.672362    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0308 01:29:43.134826    4296 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.4623368s)
	I0308 01:29:43.134912    4296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 01:29:43.149990    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:43.157655    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-503300 minikube.k8s.io/updated_at=2024_03_08T01_29_43_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=kindnet-503300 minikube.k8s.io/primary=true
	I0308 01:29:43.174950    4296 ops.go:34] apiserver oom_adj: -16
	I0308 01:29:43.341776    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:43.851833    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:44.352063    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:44.853198    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:45.365708    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:45.856545    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:47.015022   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:47.015022   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:48.017286   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:29:50.342571   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:50.342646   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:50.342677   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:29:46.352934    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:46.852122    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:47.349293    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:47.853230    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:48.350961    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:48.852196    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:49.362921    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:49.852530    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:50.359546    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:50.856190    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:51.344753    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:51.862615    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:52.351321    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:52.845661    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:53.362194    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:53.853843    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:54.074984    4296 kubeadm.go:1106] duration metric: took 10.9398512s to wait for elevateKubeSystemPrivileges
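The repeated `kubectl get sa default` runs above are a fixed-interval retry: the start code keeps asking until the default service account exists, which is what the 10.9s elevateKubeSystemPrivileges metric measures. A simplified sketch of such a wait loop; the kubectl and kubeconfig paths come from the log, while the 500ms interval and 2-minute cap are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the repeated ssh_runner calls in the log.
// The polling interval and timeout are illustrative values, not from the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account did not appear within %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.4/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}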
	W0308 01:29:54.075133    4296 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 01:29:54.075133    4296 kubeadm.go:393] duration metric: took 28.053254s to StartCluster
	I0308 01:29:54.075219    4296 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:54.075344    4296 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 01:29:54.078651    4296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:54.079632    4296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0308 01:29:54.080187    4296 start.go:234] Will wait 15m0s for node &{Name: IP:172.20.59.53 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 01:29:54.080187    4296 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 01:29:54.080329    4296 addons.go:69] Setting storage-provisioner=true in profile "kindnet-503300"
	I0308 01:29:54.080427    4296 addons.go:69] Setting default-storageclass=true in profile "kindnet-503300"
	I0308 01:29:54.080427    4296 addons.go:234] Setting addon storage-provisioner=true in "kindnet-503300"
	I0308 01:29:54.080427    4296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-503300"
	I0308 01:29:54.086238    4296 out.go:177] * Verifying Kubernetes components...
	I0308 01:29:52.944678   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:52.944678   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:53.952980   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:29:54.080427    4296 host.go:66] Checking if "kindnet-503300" exists ...
	I0308 01:29:54.080931    4296 config.go:182] Loaded profile config "kindnet-503300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:29:54.081795    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:29:54.091038    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:29:54.116771    4296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:29:54.674366    4296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0308 01:29:54.818016    4296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:29:55.609416    4296 start.go:948] {"host.minikube.internal": 172.20.48.1} host record injected into CoreDNS's ConfigMap
	I0308 01:29:55.614747    4296 node_ready.go:35] waiting up to 15m0s for node "kindnet-503300" to be "Ready" ...
	I0308 01:29:56.133476    4296 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-503300" context rescaled to 1 replicas
	I0308 01:29:56.920977    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:56.920977    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:56.921525    4296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 01:29:56.757873   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:56.760327   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:56.760500   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:29:59.916030   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:59.916085   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:56.927479    4296 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 01:29:56.927479    4296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 01:29:56.927479    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:29:56.950254    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:56.950306    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:56.953834    4296 addons.go:234] Setting addon default-storageclass=true in "kindnet-503300"
	I0308 01:29:56.953978    4296 host.go:66] Checking if "kindnet-503300" exists ...
	I0308 01:29:56.955352    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:29:57.634505    4296 node_ready.go:53] node "kindnet-503300" has status "Ready":"False"
	I0308 01:29:59.652306    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:59.652306    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:59.654763    4296 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 01:29:59.654859    4296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 01:29:59.654952    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:29:59.751391    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:59.751456    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:59.751456    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:00.123532    4296 node_ready.go:53] node "kindnet-503300" has status "Ready":"False"
	I0308 01:30:00.935874   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:03.702553   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:03.702553   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:03.702553   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:02.133778    4296 node_ready.go:53] node "kindnet-503300" has status "Ready":"False"
	I0308 01:30:02.280051    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:02.280535    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:02.280796    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:02.723194    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:30:02.723241    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:02.723881    4296 sshutil.go:53] new ssh client: &{IP:172.20.59.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\id_rsa Username:docker}
	I0308 01:30:02.908686    4296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 01:30:04.264217    4296 node_ready.go:53] node "kindnet-503300" has status "Ready":"False"
	I0308 01:30:04.437409    4296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.5271645s)
	I0308 01:30:05.423390    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:30:05.423390    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:05.423950    4296 sshutil.go:53] new ssh client: &{IP:172.20.59.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\id_rsa Username:docker}
	I0308 01:30:05.573626    4296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 01:30:05.923243    4296 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0308 01:30:05.925466    4296 addons.go:505] duration metric: took 11.8451669s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0308 01:30:06.453762   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:06.453762   14284 main.go:141] libmachine: [stderr =====>] : 
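During "Waiting for host to start..." above, the driver keeps re-querying the VM state and the first network adapter's address list until an IP comes back (172.20.55.16 here). A sketch of that polling loop, reusing the same PowerShell query; the 5-second interval and 3-minute cap are illustrative values, not taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForIP polls Hyper-V until the first network adapter reports an address,
// the same loop the "Waiting for host to start..." phase runs in the log.
func waitForIP(vm string, timeout time.Duration) (string, error) {
	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
	query := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command(ps, "-NoProfile", "-NonInteractive", query).Output()
		if ip := strings.TrimSpace(string(out)); ip != "" {
			return ip, nil
		}
		time.Sleep(5 * time.Second) // assumed interval
	}
	return "", fmt.Errorf("%s did not get an IP within %s", vm, timeout)
}

func main() {
	ip, err := waitForIP("calico-503300", 3*time.Minute)
	fmt.Println(ip, err)
}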
	I0308 01:30:06.460911   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:08.599201   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:08.599426   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:08.599426   14284 machine.go:94] provisionDockerMachine start ...
	I0308 01:30:08.599505   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:06.630161    4296 node_ready.go:53] node "kindnet-503300" has status "Ready":"False"
	I0308 01:30:07.628160    4296 node_ready.go:49] node "kindnet-503300" has status "Ready":"True"
	I0308 01:30:07.628160    4296 node_ready.go:38] duration metric: took 12.0131721s for node "kindnet-503300" to be "Ready" ...
	I0308 01:30:07.628160    4296 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:30:07.640804    4296 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-6srjt" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.658628    4296 pod_ready.go:92] pod "coredns-5dd5756b68-6srjt" in "kube-system" namespace has status "Ready":"True"
	I0308 01:30:09.658628    4296 pod_ready.go:81] duration metric: took 2.0178052s for pod "coredns-5dd5756b68-6srjt" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.658628    4296 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.667095    4296 pod_ready.go:92] pod "etcd-kindnet-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:30:09.667095    4296 pod_ready.go:81] duration metric: took 8.4673ms for pod "etcd-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.667192    4296 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.676193    4296 pod_ready.go:92] pod "kube-apiserver-kindnet-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:30:09.676193    4296 pod_ready.go:81] duration metric: took 9.0008ms for pod "kube-apiserver-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.676193    4296 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.685335    4296 pod_ready.go:92] pod "kube-controller-manager-kindnet-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:30:09.685422    4296 pod_ready.go:81] duration metric: took 9.2289ms for pod "kube-controller-manager-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.685422    4296 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-gzd7d" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.693070    4296 pod_ready.go:92] pod "kube-proxy-gzd7d" in "kube-system" namespace has status "Ready":"True"
	I0308 01:30:09.693070    4296 pod_ready.go:81] duration metric: took 7.5277ms for pod "kube-proxy-gzd7d" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.693070    4296 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:10.056528    4296 pod_ready.go:92] pod "kube-scheduler-kindnet-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:30:10.056650    4296 pod_ready.go:81] duration metric: took 363.5764ms for pod "kube-scheduler-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:10.056650    4296 pod_ready.go:38] duration metric: took 2.4284671s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:30:10.056650    4296 api_server.go:52] waiting for apiserver process to appear ...
	I0308 01:30:10.066822    4296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 01:30:10.097409    4296 api_server.go:72] duration metric: took 16.0169295s to wait for apiserver process to appear ...
	I0308 01:30:10.097409    4296 api_server.go:88] waiting for apiserver healthz status ...
	I0308 01:30:10.097409    4296 api_server.go:253] Checking apiserver healthz at https://172.20.59.53:8443/healthz ...
	I0308 01:30:10.103907    4296 api_server.go:279] https://172.20.59.53:8443/healthz returned 200:
	ok
	I0308 01:30:10.108614    4296 api_server.go:141] control plane version: v1.28.4
	I0308 01:30:10.108674    4296 api_server.go:131] duration metric: took 11.2651ms to wait for apiserver health ...
	I0308 01:30:10.108739    4296 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 01:30:10.264607    4296 system_pods.go:59] 8 kube-system pods found
	I0308 01:30:10.264672    4296 system_pods.go:61] "coredns-5dd5756b68-6srjt" [685f0935-230c-4286-b225-28220d432dab] Running
	I0308 01:30:10.264672    4296 system_pods.go:61] "etcd-kindnet-503300" [b938033c-513e-46c3-b555-ade94b8be310] Running
	I0308 01:30:10.264672    4296 system_pods.go:61] "kindnet-gb58t" [b90faeee-74b4-4a1c-9e75-d869293763cb] Running
	I0308 01:30:10.264672    4296 system_pods.go:61] "kube-apiserver-kindnet-503300" [db3b13ea-ba3f-4fce-b11c-fe63cde5c504] Running
	I0308 01:30:10.264672    4296 system_pods.go:61] "kube-controller-manager-kindnet-503300" [50f720f3-0896-459d-979f-41783837b456] Running
	I0308 01:30:10.264672    4296 system_pods.go:61] "kube-proxy-gzd7d" [8a7e04cd-2cbd-44ba-a540-0de5f7f0a7a8] Running
	I0308 01:30:10.264672    4296 system_pods.go:61] "kube-scheduler-kindnet-503300" [1d6f8e0e-0a3c-475d-964b-1bf24163896a] Running
	I0308 01:30:10.264672    4296 system_pods.go:61] "storage-provisioner" [c50192b1-15cb-4cfa-afa9-2814304000e1] Running
	I0308 01:30:10.264672    4296 system_pods.go:74] duration metric: took 155.9314ms to wait for pod list to return data ...
	I0308 01:30:10.264767    4296 default_sa.go:34] waiting for default service account to be created ...
	I0308 01:30:10.466180    4296 default_sa.go:45] found service account: "default"
	I0308 01:30:10.466180    4296 default_sa.go:55] duration metric: took 201.4111ms for default service account to be created ...
	I0308 01:30:10.466180    4296 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 01:30:10.679637    4296 system_pods.go:86] 8 kube-system pods found
	I0308 01:30:10.679637    4296 system_pods.go:89] "coredns-5dd5756b68-6srjt" [685f0935-230c-4286-b225-28220d432dab] Running
	I0308 01:30:10.679637    4296 system_pods.go:89] "etcd-kindnet-503300" [b938033c-513e-46c3-b555-ade94b8be310] Running
	I0308 01:30:10.679637    4296 system_pods.go:89] "kindnet-gb58t" [b90faeee-74b4-4a1c-9e75-d869293763cb] Running
	I0308 01:30:10.679637    4296 system_pods.go:89] "kube-apiserver-kindnet-503300" [db3b13ea-ba3f-4fce-b11c-fe63cde5c504] Running
	I0308 01:30:10.679637    4296 system_pods.go:89] "kube-controller-manager-kindnet-503300" [50f720f3-0896-459d-979f-41783837b456] Running
	I0308 01:30:10.679637    4296 system_pods.go:89] "kube-proxy-gzd7d" [8a7e04cd-2cbd-44ba-a540-0de5f7f0a7a8] Running
	I0308 01:30:10.680225    4296 system_pods.go:89] "kube-scheduler-kindnet-503300" [1d6f8e0e-0a3c-475d-964b-1bf24163896a] Running
	I0308 01:30:10.680290    4296 system_pods.go:89] "storage-provisioner" [c50192b1-15cb-4cfa-afa9-2814304000e1] Running
	I0308 01:30:10.680290    4296 system_pods.go:126] duration metric: took 214.1078ms to wait for k8s-apps to be running ...
	I0308 01:30:10.680350    4296 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 01:30:10.692062    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 01:30:10.720346    4296 system_svc.go:56] duration metric: took 39.9957ms WaitForService to wait for kubelet
	I0308 01:30:10.720346    4296 kubeadm.go:576] duration metric: took 16.6398608s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 01:30:10.720346    4296 node_conditions.go:102] verifying NodePressure condition ...
	I0308 01:30:10.858937    4296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 01:30:10.859018    4296 node_conditions.go:123] node cpu capacity is 2
	I0308 01:30:10.859056    4296 node_conditions.go:105] duration metric: took 138.7084ms to run NodePressure ...
	I0308 01:30:10.859101    4296 start.go:240] waiting for startup goroutines ...
	I0308 01:30:10.859172    4296 start.go:245] waiting for cluster config update ...
	I0308 01:30:10.859207    4296 start.go:254] writing updated cluster config ...
	I0308 01:30:10.872572    4296 ssh_runner.go:195] Run: rm -f paused
	I0308 01:30:11.006933    4296 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 01:30:11.013918    4296 out.go:177] * Done! kubectl is now configured to use "kindnet-503300" cluster and "default" namespace by default
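For reference, the pod_ready waits above poll the Ready condition of each system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) through the API server. A rough manual equivalent, assuming the kindnet-503300 kubeconfig context mentioned on the line above, would be:

	# roughly equivalent manual check of the same system-critical pods
	kubectl --context kindnet-503300 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=15m
	kubectl --context kindnet-503300 -n kube-system get pods -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'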
	I0308 01:30:10.728110   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:10.728110   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:10.728110   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:13.274663   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:13.274729   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:13.280577   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:30:13.281120   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:30:13.281120   14284 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 01:30:13.414967   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 01:30:13.414967   14284 buildroot.go:166] provisioning hostname "calico-503300"
	I0308 01:30:13.414967   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:15.524725   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:15.524725   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:15.524725   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:18.054021   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:18.054067   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:18.058583   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:30:18.058757   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:30:18.058757   14284 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-503300 && echo "calico-503300" | sudo tee /etc/hostname
	I0308 01:30:18.244577   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-503300
	
	I0308 01:30:18.244577   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:20.627255   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:20.627371   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:20.627480   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:23.271666   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:23.271954   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:23.277672   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:30:23.278247   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:30:23.278324   14284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-503300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-503300/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-503300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 01:30:23.440906   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
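The two SSH commands above set the guest hostname to calico-503300 and pin it to 127.0.1.1 in /etc/hosts. One way to spot-check both took effect from the Windows host, assuming OpenSSH is available and reusing the VM IP and key path that appear elsewhere in this log, would be:

	# hypothetical spot check over the same SSH path minikube uses
	ssh -i C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\id_rsa docker@172.20.55.16 "hostname; grep -n 127.0.1.1 /etc/hosts"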
	I0308 01:30:23.440906   14284 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 01:30:23.440906   14284 buildroot.go:174] setting up certificates
	I0308 01:30:23.440906   14284 provision.go:84] configureAuth start
	I0308 01:30:23.440906   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:25.708510   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:25.708510   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:25.718651   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:28.381299   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:28.392546   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:28.392546   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:30.740270   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:30.752050   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:30.752050   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:33.329685   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:33.329745   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:33.329805   14284 provision.go:143] copyHostCerts
	I0308 01:30:33.330402   14284 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 01:30:33.330402   14284 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 01:30:33.330402   14284 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 01:30:33.332442   14284 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 01:30:33.332510   14284 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 01:30:33.332921   14284 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 01:30:33.334647   14284 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 01:30:33.334647   14284 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 01:30:33.335103   14284 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 01:30:33.336487   14284 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-503300 san=[127.0.0.1 172.20.55.16 calico-503300 localhost minikube]
	I0308 01:30:33.587115   14284 provision.go:177] copyRemoteCerts
	I0308 01:30:33.604578   14284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 01:30:33.604743   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:35.692546   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:35.704009   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:35.704143   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:38.265930   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:38.265930   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:38.274418   14284 sshutil.go:53] new ssh client: &{IP:172.20.55.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\id_rsa Username:docker}
	I0308 01:30:38.382038   14284 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7773585s)
	I0308 01:30:38.382628   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 01:30:38.428950   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 01:30:38.472932   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0308 01:30:38.525120   14284 provision.go:87] duration metric: took 15.0840734s to configureAuth
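configureAuth above generates server.pem with SANs [127.0.0.1 172.20.55.16 calico-503300 localhost minikube] and copies it, together with ca.pem and server-key.pem, to /etc/docker on the guest; these are the same files the --tlsverify flags in the docker unit below point at. Assuming openssl is available on the host, the SANs baked into the generated certificate can be inspected with:

	# print the certificate details; look for the "X509v3 Subject Alternative Name" section,
	# which should list DNS:calico-503300, DNS:localhost, DNS:minikube, IP:127.0.0.1, IP:172.20.55.16
	openssl x509 -noout -text -in C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem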
	I0308 01:30:38.525160   14284 buildroot.go:189] setting minikube options for container-runtime
	I0308 01:30:38.525201   14284 config.go:182] Loaded profile config "calico-503300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:30:38.525201   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:40.716673   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:40.716673   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:40.727582   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:43.295120   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:43.304180   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:43.309162   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:30:43.309162   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:30:43.309162   14284 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 01:30:43.444738   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 01:30:43.444738   14284 buildroot.go:70] root file system type: tmpfs
	I0308 01:30:43.445351   14284 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 01:30:43.445351   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:45.753656   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:45.753994   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:45.754068   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:48.225870   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:48.225870   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:48.235541   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:30:48.235808   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:30:48.235808   14284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 01:30:48.396113   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 01:30:48.396190   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:50.508717   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:50.509183   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:50.509183   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:53.355761   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:53.355844   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:53.361642   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:30:53.361706   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:30:53.361706   14284 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 01:30:54.715626   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0308 01:30:54.715626   14284 machine.go:97] duration metric: took 46.1157699s to provisionDockerMachine
	I0308 01:30:54.715626   14284 client.go:171] duration metric: took 2m1.8590205s to LocalClient.Create
	I0308 01:30:54.715626   14284 start.go:167] duration metric: took 2m1.8590205s to libmachine.API.Create "calico-503300"
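The unit swap above writes docker.service.new over SSH and moves it into place; the empty ExecStart= line clears the command inherited from the base unit before the dockerd invocation with -H tcp://0.0.0.0:2376 and the --tlsverify options is set. To confirm which unit systemd actually loaded and which flags dockerd is running with, one could run on the guest (assuming procps pgrep, which this log already uses):

	# show the unit file systemd resolved after the daemon-reload/enable/restart above
	sudo systemctl cat docker.service
	# the dockerd command line should include --tlsverify and -H tcp://0.0.0.0:2376
	pgrep -af dockerd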
	I0308 01:30:54.715626   14284 start.go:293] postStartSetup for "calico-503300" (driver="hyperv")
	I0308 01:30:54.715626   14284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 01:30:54.733992   14284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 01:30:54.733992   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:56.941284   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:56.941284   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:56.941372   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:59.483619   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:59.483619   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:59.484141   14284 sshutil.go:53] new ssh client: &{IP:172.20.55.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\id_rsa Username:docker}
	I0308 01:30:59.592982   14284 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8589447s)
	I0308 01:30:59.605308   14284 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 01:30:59.613403   14284 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 01:30:59.613403   14284 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 01:30:59.613920   14284 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 01:30:59.615125   14284 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 01:30:59.628577   14284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 01:30:59.648436   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 01:30:59.698552   14284 start.go:296] duration metric: took 4.9828794s for postStartSetup
	I0308 01:30:59.702505   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:31:01.968424   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:01.968424   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:01.968710   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:04.692971   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:31:04.702098   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:04.702231   14284 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\config.json ...
	I0308 01:31:04.705014   14284 start.go:128] duration metric: took 2m11.8549515s to createHost
	I0308 01:31:04.705539   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:31:06.897073   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:06.897073   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:06.897221   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:09.409715   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:31:09.420735   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:09.429113   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:31:09.430817   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:31:09.430870   14284 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 01:31:09.569210   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709861469.582091327
	
	I0308 01:31:09.569210   14284 fix.go:216] guest clock: 1709861469.582091327
	I0308 01:31:09.569210   14284 fix.go:229] Guest: 2024-03-08 01:31:09.582091327 +0000 UTC Remote: 2024-03-08 01:31:04.7050146 +0000 UTC m=+419.527802701 (delta=4.877076727s)
	I0308 01:31:09.569793   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:31:11.885181   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:11.885181   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:11.895195   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:14.542297   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:31:14.543574   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:14.551967   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:31:14.552643   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:31:14.552911   14284 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709861469
	I0308 01:31:14.717004   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 01:31:09 UTC 2024
	
	I0308 01:31:14.717084   14284 fix.go:236] clock set: Fri Mar  8 01:31:09 UTC 2024
	 (err=<nil>)
	I0308 01:31:14.717084   14284 start.go:83] releasing machines lock for "calico-503300", held for 2m21.867637s
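The fix.go lines above compare the guest clock (date +%s.%N over SSH) against the host time, find a delta of roughly 4.9s, and reset the guest with sudo date -s @<epoch>. A stripped-down sketch of the same check from a POSIX shell, assuming SSH access with the key shown earlier in the log (key flag omitted here):

	# measure guest/host clock skew and reset the guest from the host's epoch
	host_epoch=$(date +%s)
	guest_epoch=$(ssh docker@172.20.55.16 date +%s)
	echo "skew: $((host_epoch - guest_epoch))s"
	ssh docker@172.20.55.16 sudo date -s "@${host_epoch}"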
	I0308 01:31:14.717397   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:31:14.717447    3724 start.go:364] duration metric: took 5m27.8923768s to acquireMachinesLock for "pause-549000"
	I0308 01:31:14.717896    3724 start.go:96] Skipping create...Using existing machine configuration
	I0308 01:31:14.717975    3724 fix.go:54] fixHost starting: 
	I0308 01:31:14.718911    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:17.107281    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:17.107454    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:17.107584    3724 fix.go:112] recreateIfNeeded on pause-549000: state=Running err=<nil>
	W0308 01:31:17.107646    3724 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 01:31:17.110751    3724 out.go:177] * Updating the running hyperv "pause-549000" VM ...
	I0308 01:31:17.076934   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:17.076934   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:17.077006   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:19.935728   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:31:19.935728   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:19.943135   14284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 01:31:19.943135   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:31:19.968466   14284 ssh_runner.go:195] Run: cat /version.json
	I0308 01:31:19.968466   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:31:17.114486    3724 machine.go:94] provisionDockerMachine start ...
	I0308 01:31:17.114486    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:19.434022    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:19.435629    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:19.435629    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:22.659895   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:22.659964   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:22.659964   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:22.689600   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:22.689600   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:22.690126   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:22.589211    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:31:22.589211    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:22.596795    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:31:22.597929    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:31:22.597984    3724 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 01:31:22.766761    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-549000
	
	I0308 01:31:22.766761    3724 buildroot.go:166] provisioning hostname "pause-549000"
	I0308 01:31:22.766761    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:25.456219    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:25.459073    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:25.459130    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:25.875843   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:31:25.875843   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:25.876147   14284 sshutil.go:53] new ssh client: &{IP:172.20.55.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\id_rsa Username:docker}
	I0308 01:31:25.961562   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:31:25.961562   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:25.962322   14284 sshutil.go:53] new ssh client: &{IP:172.20.55.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\id_rsa Username:docker}
	I0308 01:31:26.054987   14284 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (6.1116836s)
	I0308 01:31:26.077460   14284 ssh_runner.go:235] Completed: cat /version.json: (6.1088515s)
	I0308 01:31:26.092259   14284 ssh_runner.go:195] Run: systemctl --version
	I0308 01:31:26.114776   14284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 01:31:26.124699   14284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 01:31:26.138800   14284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 01:31:26.173083   14284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 01:31:26.173174   14284 start.go:494] detecting cgroup driver to use...
	I0308 01:31:26.173452   14284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:31:26.222376   14284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 01:31:26.264899   14284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 01:31:26.290520   14284 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 01:31:26.306609   14284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 01:31:26.339950   14284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:31:26.374752   14284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 01:31:26.414542   14284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:31:26.455420   14284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 01:31:26.488088   14284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 01:31:26.527257   14284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 01:31:26.568510   14284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 01:31:26.606138   14284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:31:26.815489   14284 ssh_runner.go:195] Run: sudo systemctl restart containerd
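The sed commands above rewrite /etc/containerd/config.toml to use the cgroupfs driver (SystemdCgroup = false), the io.containerd.runc.v2 runtime, registry.k8s.io/pause:3.9 as the sandbox image, and /etc/cni/net.d as the CNI conf dir, then restart containerd. The combined effect can be confirmed on the guest with a single grep:

	# confirm the settings the sed edits above were meant to produce
	grep -nE 'SystemdCgroup|sandbox_image|conf_dir|runc\.v2' /etc/containerd/config.toml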
	I0308 01:31:26.856958   14284 start.go:494] detecting cgroup driver to use...
	I0308 01:31:26.873986   14284 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 01:31:26.927028   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:31:26.969428   14284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 01:31:27.026562   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:31:27.064799   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 01:31:27.103913   14284 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0308 01:31:27.167082   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 01:31:27.199112   14284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:31:27.254951   14284 ssh_runner.go:195] Run: which cri-dockerd
	I0308 01:31:27.285154   14284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 01:31:27.313229   14284 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 01:31:27.376877   14284 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 01:31:27.617827   14284 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 01:31:27.878036   14284 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 01:31:27.878410   14284 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0308 01:31:27.928833   14284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:31:28.120434   14284 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 01:31:29.739394   14284 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6189458s)
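Just before the restart above, docker.go writes a 130-byte daemon.json to select cgroupfs as docker's cgroup driver; the log does not print the payload. Purely as an illustration of the standard daemon.json form for that setting (not the literal bytes minikube writes):

	# illustrative only: the actual 130-byte payload is not shown in the log
	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker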
	I0308 01:31:29.754659   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 01:31:29.795554   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:31:29.830347   14284 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 01:31:30.093648   14284 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 01:31:30.352914   14284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:31:30.593532   14284 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 01:31:30.657165   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:31:30.706398   14284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:31:30.902004   14284 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 01:31:31.029866   14284 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 01:31:31.047041   14284 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 01:31:31.060335   14284 start.go:562] Will wait 60s for crictl version
	I0308 01:31:31.074458   14284 ssh_runner.go:195] Run: which crictl
	I0308 01:31:31.093408   14284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 01:31:31.180724   14284 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
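With /etc/crictl.yaml now pointing at unix:///var/run/cri-dockerd.sock, the crictl probe above reports the docker engine (RuntimeVersion 24.0.7) through the CRI. The same probe can be repeated by hand on the guest, spelling the endpoint out explicitly:

	# query the CRI endpoint cri-dockerd exposes in front of the docker engine
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a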
	I0308 01:31:31.195379   14284 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:31:31.245102   14284 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:31:28.357796    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:31:28.357796    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:28.373315    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:31:28.374079    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:31:28.374079    3724 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-549000 && echo "pause-549000" | sudo tee /etc/hostname
	I0308 01:31:28.523105    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-549000
	
	I0308 01:31:28.523207    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:30.802240    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:30.816134    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:30.816134    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:31.308239   14284 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 01:31:31.308357   14284 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 01:31:31.317134   14284 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 01:31:31.317134   14284 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 01:31:31.317134   14284 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 01:31:31.317134   14284 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 01:31:31.321255   14284 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 01:31:31.321255   14284 ip.go:210] interface addr: 172.20.48.1/20
	I0308 01:31:31.337592   14284 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 01:31:31.342880   14284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
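The one-liner above rewrites /etc/hosts inside the guest so host.minikube.internal resolves to 172.20.48.1, the "vEthernet (Default Switch)" address detected on the Windows host a few lines earlier. From inside the VM the mapping can be confirmed with:

	# confirm the alias added by the /etc/hosts rewrite above
	grep host.minikube.internal /etc/hosts
	# ICMP to 172.20.48.1 may be blocked by the Windows host firewall, so a failed ping is not conclusive
	ping -c 1 172.20.48.1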
	I0308 01:31:31.377221   14284 kubeadm.go:877] updating cluster {Name:calico-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.2
8.4 ClusterName:calico-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:172.20.55.16 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 01:31:31.377631   14284 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 01:31:31.391773   14284 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:31:31.418648   14284 docker.go:685] Got preloaded images: 
	I0308 01:31:31.418793   14284 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0308 01:31:31.431755   14284 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0308 01:31:31.469906   14284 ssh_runner.go:195] Run: which lz4
	I0308 01:31:31.491973   14284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 01:31:31.498683   14284 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 01:31:31.498925   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0308 01:31:34.435195   14284 docker.go:649] duration metric: took 2.9563584s to copy over tarball
	I0308 01:31:34.453969   14284 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 01:31:34.397259    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:31:34.397439    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:34.405826    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:31:34.406977    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:31:34.407087    3724 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-549000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-549000/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-549000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 01:31:34.567755    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 01:31:34.567755    3724 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 01:31:34.568296    3724 buildroot.go:174] setting up certificates
	I0308 01:31:34.568296    3724 provision.go:84] configureAuth start
	I0308 01:31:34.568371    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:37.048506    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:37.048700    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:37.049329    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:40.037206    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:31:40.037206    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:40.039622    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:43.168632   14284 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.7142164s)
	I0308 01:31:43.168710   14284 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 01:31:43.243663   14284 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0308 01:31:43.263037   14284 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0308 01:31:43.320475   14284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:31:43.543692   14284 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 01:31:42.347717    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:42.347717    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:42.347717    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:45.137447    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:31:45.139775    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:45.139775    3724 provision.go:143] copyHostCerts
	I0308 01:31:45.140154    3724 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 01:31:45.140232    3724 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 01:31:45.140719    3724 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 01:31:45.141982    3724 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 01:31:45.142071    3724 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 01:31:45.142474    3724 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 01:31:45.143757    3724 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 01:31:45.143757    3724 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 01:31:45.144288    3724 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 01:31:45.145683    3724 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-549000 san=[127.0.0.1 172.20.54.215 localhost minikube pause-549000]
	I0308 01:31:45.715213    3724 provision.go:177] copyRemoteCerts
	I0308 01:31:45.731253    3724 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 01:31:45.731253    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:47.286254   14284 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.7425274s)
	I0308 01:31:47.303988   14284 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:31:47.362952   14284 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0308 01:31:47.362952   14284 cache_images.go:84] Images are preloaded, skipping loading
	I0308 01:31:47.363082   14284 kubeadm.go:928] updating node { 172.20.55.16 8443 v1.28.4 docker true true} ...
	I0308 01:31:47.363461   14284 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-503300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.55.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:calico-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0308 01:31:47.375488   14284 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0308 01:31:47.437948   14284 cni.go:84] Creating CNI manager for "calico"
	I0308 01:31:47.437948   14284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 01:31:47.437948   14284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.55.16 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-503300 NodeName:calico-503300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.55.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.55.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 01:31:47.437948   14284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.55.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "calico-503300"
	  kubeletExtraArgs:
	    node-ip: 172.20.55.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.55.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 01:31:47.456562   14284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 01:31:47.484967   14284 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 01:31:47.498196   14284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 01:31:47.519219   14284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0308 01:31:47.558050   14284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 01:31:47.594526   14284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0308 01:31:47.652652   14284 ssh_runner.go:195] Run: grep 172.20.55.16	control-plane.minikube.internal$ /etc/hosts
	I0308 01:31:47.662592   14284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.55.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 01:31:47.704294   14284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:31:47.953483   14284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:31:47.980742   14284 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300 for IP: 172.20.55.16
	I0308 01:31:47.980742   14284 certs.go:194] generating shared ca certs ...
	I0308 01:31:47.980742   14284 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:31:47.986471   14284 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 01:31:47.986937   14284 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 01:31:47.987125   14284 certs.go:256] generating profile certs ...
	I0308 01:31:47.987882   14284 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\client.key
	I0308 01:31:47.988043   14284 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\client.crt with IP's: []
	I0308 01:31:48.166879   14284 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\client.crt ...
	I0308 01:31:48.166879   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\client.crt: {Name:mkef29162d9ddc9479d5d9954eda9121f483432f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:31:48.168435   14284 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\client.key ...
	I0308 01:31:48.168435   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\client.key: {Name:mk770cd20d27827299fed4fccedf13ab7bf665de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:31:48.169736   14284 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.key.8d89abe0
	I0308 01:31:48.169922   14284 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.crt.8d89abe0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.55.16]
	I0308 01:31:48.531923   14284 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.crt.8d89abe0 ...
	I0308 01:31:48.531923   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.crt.8d89abe0: {Name:mke0e27e89fe08de672060d263a29c2ccc905996 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:31:48.536652   14284 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.key.8d89abe0 ...
	I0308 01:31:48.536652   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.key.8d89abe0: {Name:mk7f94d05b162d58e31a7c06c316d6bf3f534512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:31:48.538032   14284 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.crt.8d89abe0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.crt
	I0308 01:31:48.552115   14284 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.key.8d89abe0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.key
	I0308 01:31:48.552471   14284 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.key
	I0308 01:31:48.553936   14284 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.crt with IP's: []
	I0308 01:31:48.666580   14284 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.crt ...
	I0308 01:31:48.666580   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.crt: {Name:mkac2d2459dde68e22d0324f5fae615dcb1db770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:31:48.671811   14284 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.key ...
	I0308 01:31:48.671811   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.key: {Name:mkfe130b3c366a72d0ebcc741131ab1500ca22b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:31:48.687532   14284 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 01:31:48.688241   14284 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 01:31:48.688241   14284 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 01:31:48.688778   14284 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 01:31:48.689397   14284 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 01:31:48.689397   14284 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 01:31:48.690391   14284 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 01:31:48.693179   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 01:31:48.750483   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 01:31:48.790561   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 01:31:48.843293   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 01:31:48.896108   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0308 01:31:48.942706   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 01:31:49.005168   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 01:31:49.056308   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 01:31:49.110821   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 01:31:49.160925   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 01:31:49.209856   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 01:31:49.257672   14284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 01:31:49.318563   14284 ssh_runner.go:195] Run: openssl version
	I0308 01:31:49.348284   14284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 01:31:49.388998   14284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:31:49.396986   14284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:31:49.412136   14284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:31:49.443862   14284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 01:31:49.492566   14284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 01:31:49.534797   14284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 01:31:49.542870   14284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 01:31:49.561032   14284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 01:31:49.590046   14284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0308 01:31:49.633579   14284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 01:31:49.672596   14284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 01:31:49.683565   14284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 01:31:49.698835   14284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 01:31:49.720547   14284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 01:31:49.757568   14284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 01:31:49.764034   14284 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 01:31:49.764478   14284 kubeadm.go:391] StartCluster: {Name:calico-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:172.20.55.16 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 01:31:49.778030   14284 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 01:31:49.823931   14284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 01:31:49.858331   14284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 01:31:49.901131   14284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 01:31:49.919504   14284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 01:31:49.919559   14284 kubeadm.go:156] found existing configuration files:
	
	I0308 01:31:49.939554   14284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 01:31:49.958419   14284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 01:31:49.972407   14284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 01:31:50.003237   14284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 01:31:50.021664   14284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 01:31:50.037994   14284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 01:31:50.079191   14284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 01:31:50.097270   14284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 01:31:50.111198   14284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 01:31:50.149614   14284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 01:31:50.169065   14284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 01:31:50.183407   14284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 01:31:50.201570   14284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 01:31:48.153886    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:48.154176    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:48.154236    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:51.015958    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:31:51.020136    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:51.020315    3724 sshutil.go:53] new ssh client: &{IP:172.20.54.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\pause-549000\id_rsa Username:docker}
	I0308 01:31:51.142017    3724 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.4106182s)
	I0308 01:31:51.142612    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 01:31:51.194966    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0308 01:31:51.244338    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 01:31:51.290128    3724 provision.go:87] duration metric: took 16.7216768s to configureAuth
	I0308 01:31:51.290128    3724 buildroot.go:189] setting minikube options for container-runtime
	I0308 01:31:51.290976    3724 config.go:182] Loaded profile config "pause-549000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:31:51.291120    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:50.515990   14284 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 01:31:53.580699    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:53.580781    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:53.580781    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:56.407283    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:31:56.407283    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:56.416878    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:31:56.417566    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:31:56.417566    3724 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 01:31:56.566395    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 01:31:56.566395    3724 buildroot.go:70] root file system type: tmpfs
	I0308 01:31:56.567175    3724 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 01:31:56.567333    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:58.948936    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:58.949050    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:58.949050    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:06.550314   14284 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 01:32:06.553171   14284 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 01:32:06.553245   14284 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 01:32:06.553793   14284 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 01:32:06.553986   14284 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 01:32:06.554125   14284 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 01:32:06.556868   14284 out.go:204]   - Generating certificates and keys ...
	I0308 01:32:06.557518   14284 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 01:32:06.557733   14284 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 01:32:06.557964   14284 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 01:32:06.558298   14284 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0308 01:32:06.558533   14284 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0308 01:32:06.558776   14284 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0308 01:32:06.559010   14284 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0308 01:32:06.559121   14284 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [calico-503300 localhost] and IPs [172.20.55.16 127.0.0.1 ::1]
	I0308 01:32:06.559440   14284 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0308 01:32:06.559809   14284 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [calico-503300 localhost] and IPs [172.20.55.16 127.0.0.1 ::1]
	I0308 01:32:06.560035   14284 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 01:32:06.560216   14284 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 01:32:06.560216   14284 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0308 01:32:06.560386   14284 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 01:32:06.560534   14284 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 01:32:06.560534   14284 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 01:32:06.560534   14284 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 01:32:06.561047   14284 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 01:32:06.561411   14284 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 01:32:06.561607   14284 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 01:32:06.565270   14284 out.go:204]   - Booting up control plane ...
	I0308 01:32:06.565328   14284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 01:32:06.565920   14284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 01:32:06.566034   14284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 01:32:06.566034   14284 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 01:32:06.566034   14284 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 01:32:06.567757   14284 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 01:32:06.567911   14284 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 01:32:06.567911   14284 kubeadm.go:309] [apiclient] All control plane components are healthy after 10.005664 seconds
	I0308 01:32:06.567911   14284 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 01:32:06.569028   14284 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 01:32:06.569028   14284 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 01:32:06.569028   14284 kubeadm.go:309] [mark-control-plane] Marking the node calico-503300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 01:32:06.569028   14284 kubeadm.go:309] [bootstrap-token] Using token: ld1yy6.lquh2o9913bssi2z
	I0308 01:32:06.572237   14284 out.go:204]   - Configuring RBAC rules ...
	I0308 01:32:06.572784   14284 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 01:32:06.573228   14284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 01:32:06.573688   14284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 01:32:06.573947   14284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 01:32:06.574347   14284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 01:32:06.574664   14284 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 01:32:06.575090   14284 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 01:32:06.575316   14284 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 01:32:06.575359   14284 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 01:32:06.575359   14284 kubeadm.go:309] 
	I0308 01:32:06.575359   14284 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 01:32:06.575359   14284 kubeadm.go:309] 
	I0308 01:32:06.576006   14284 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 01:32:06.576099   14284 kubeadm.go:309] 
	I0308 01:32:06.576170   14284 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 01:32:06.576170   14284 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 01:32:06.576569   14284 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 01:32:06.576569   14284 kubeadm.go:309] 
	I0308 01:32:06.576569   14284 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 01:32:06.576569   14284 kubeadm.go:309] 
	I0308 01:32:06.576569   14284 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 01:32:06.576569   14284 kubeadm.go:309] 
	I0308 01:32:06.576569   14284 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 01:32:06.577732   14284 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 01:32:06.577882   14284 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 01:32:06.577882   14284 kubeadm.go:309] 
	I0308 01:32:06.578463   14284 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 01:32:06.578656   14284 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 01:32:06.578656   14284 kubeadm.go:309] 
	I0308 01:32:06.578809   14284 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ld1yy6.lquh2o9913bssi2z \
	I0308 01:32:06.580167   14284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 \
	I0308 01:32:06.580167   14284 kubeadm.go:309] 	--control-plane 
	I0308 01:32:06.580418   14284 kubeadm.go:309] 
	I0308 01:32:06.580770   14284 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 01:32:06.580770   14284 kubeadm.go:309] 
	I0308 01:32:06.581011   14284 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ld1yy6.lquh2o9913bssi2z \
	I0308 01:32:06.581320   14284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
	I0308 01:32:06.581320   14284 cni.go:84] Creating CNI manager for "calico"
	I0308 01:32:06.583717   14284 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0308 01:32:01.786492    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:01.791112    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:01.798191    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:32:01.799313    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:32:01.799491    3724 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 01:32:01.982658    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 01:32:01.982831    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:32:04.455244    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:04.456660    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:04.456660    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:06.587675   14284 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0308 01:32:06.588248   14284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (252439 bytes)
	I0308 01:32:06.731728   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0308 01:32:10.301441   14284 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (3.5696795s)
	I0308 01:32:10.301441   14284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 01:32:10.324910   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:10.326451   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-503300 minikube.k8s.io/updated_at=2024_03_08T01_32_10_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=calico-503300 minikube.k8s.io/primary=true
	I0308 01:32:10.345197   14284 ops.go:34] apiserver oom_adj: -16
	I0308 01:32:07.354237    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:07.354237    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:07.361709    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:32:07.361709    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:32:07.362435    3724 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 01:32:07.508362    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 01:32:07.508362    3724 machine.go:97] duration metric: took 50.3934071s to provisionDockerMachine
	I0308 01:32:07.508500    3724 start.go:293] postStartSetup for "pause-549000" (driver="hyperv")
	I0308 01:32:07.508500    3724 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 01:32:07.525595    3724 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 01:32:07.525595    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:32:10.033934    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:10.034985    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:10.035202    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:10.594501   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:11.092068   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:11.603362   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:12.102353   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:12.604425   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:13.106122   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:13.613284   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:14.092885   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:14.597242   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:15.109011   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:12.886440    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:12.891049    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:12.891433    3724 sshutil.go:53] new ssh client: &{IP:172.20.54.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\pause-549000\id_rsa Username:docker}
	I0308 01:32:13.003262    3724 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.4775264s)
	I0308 01:32:13.025232    3724 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 01:32:13.038451    3724 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 01:32:13.038583    3724 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 01:32:13.038583    3724 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 01:32:13.040286    3724 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 01:32:13.055222    3724 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 01:32:13.075712    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 01:32:13.160118    3724 start.go:296] duration metric: took 5.6514936s for postStartSetup
	I0308 01:32:13.160264    3724 fix.go:56] duration metric: took 58.4417447s for fixHost
	I0308 01:32:13.160377    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:32:15.536028    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:15.547316    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:15.547316    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:15.599757   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:16.113880   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:16.612862   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:17.113933   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:17.603732   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:18.105417   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:18.598224   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:18.820443   14284 kubeadm.go:1106] duration metric: took 8.5189231s to wait for elevateKubeSystemPrivileges
	W0308 01:32:18.820626   14284 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 01:32:18.820626   14284 kubeadm.go:393] duration metric: took 29.0558776s to StartCluster
	I0308 01:32:18.820883   14284 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:32:18.821090   14284 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 01:32:18.824700   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:32:18.827000   14284 start.go:234] Will wait 15m0s for node &{Name: IP:172.20.55.16 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 01:32:18.830010   14284 out.go:177] * Verifying Kubernetes components...
	I0308 01:32:18.827212   14284 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 01:32:18.827795   14284 config.go:182] Loaded profile config "calico-503300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:32:18.827891   14284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0308 01:32:18.830010   14284 addons.go:69] Setting storage-provisioner=true in profile "calico-503300"
	I0308 01:32:18.830010   14284 addons.go:69] Setting default-storageclass=true in profile "calico-503300"
	I0308 01:32:18.833493   14284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-503300"
	I0308 01:32:18.833493   14284 addons.go:234] Setting addon storage-provisioner=true in "calico-503300"
	I0308 01:32:18.833598   14284 host.go:66] Checking if "calico-503300" exists ...
	I0308 01:32:18.834480   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:32:18.834908   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:32:18.858668   14284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:32:19.462671   14284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0308 01:32:19.563352   14284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:32:18.417725    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:18.417725    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:18.428830    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:32:18.429778    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:32:18.429814    3724 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 01:32:18.555707    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709861538.565947171
	
	I0308 01:32:18.555707    3724 fix.go:216] guest clock: 1709861538.565947171
	I0308 01:32:18.555707    3724 fix.go:229] Guest: 2024-03-08 01:32:18.565947171 +0000 UTC Remote: 2024-03-08 01:32:13.1603011 +0000 UTC m=+391.671528101 (delta=5.405646071s)
	I0308 01:32:18.555707    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:32:21.511772    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:21.528157    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:21.528157    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:21.861513   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:21.861513   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:21.865925   14284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 01:32:21.869793   14284 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 01:32:21.869926   14284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 01:32:21.870016   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:32:21.929426   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:21.929496   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:21.936187   14284 addons.go:234] Setting addon default-storageclass=true in "calico-503300"
	I0308 01:32:21.936462   14284 host.go:66] Checking if "calico-503300" exists ...
	I0308 01:32:21.938134   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:32:22.408833   14284 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.9461352s)
	I0308 01:32:22.408833   14284 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.845455s)
	I0308 01:32:22.408833   14284 start.go:948] {"host.minikube.internal": 172.20.48.1} host record injected into CoreDNS's ConfigMap
	I0308 01:32:22.413738   14284 node_ready.go:35] waiting up to 15m0s for node "calico-503300" to be "Ready" ...
	I0308 01:32:22.968752   14284 kapi.go:248] "coredns" deployment in "kube-system" namespace and "calico-503300" context rescaled to 1 replicas
	I0308 01:32:24.423205   14284 node_ready.go:53] node "calico-503300" has status "Ready":"False"
	I0308 01:32:25.287277   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:25.287493   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:25.287586   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:25.587204    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:25.587304    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:25.596317    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:32:25.597134    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:32:25.597134    3724 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709861538
	I0308 01:32:25.784230    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 01:32:18 UTC 2024
	
	I0308 01:32:25.784230    3724 fix.go:236] clock set: Fri Mar  8 01:32:18 UTC 2024
	 (err=<nil>)
	I0308 01:32:25.784230    3724 start.go:83] releasing machines lock for "pause-549000", held for 1m11.0658999s
	I0308 01:32:25.784230    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:32:25.506584   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:25.510139   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:25.510420   14284 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 01:32:25.510420   14284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 01:32:25.510564   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:32:26.431837   14284 node_ready.go:53] node "calico-503300" has status "Ready":"False"
	I0308 01:32:28.471717   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:28.473907   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:28.474156   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:28.807817   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:32:28.807817   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:28.813008   14284 sshutil.go:53] new ssh client: &{IP:172.20.55.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\id_rsa Username:docker}
	I0308 01:32:28.938613   14284 node_ready.go:53] node "calico-503300" has status "Ready":"False"
	I0308 01:32:28.979472   14284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 01:32:30.145541   14284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1657935s)
	I0308 01:32:28.779718    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:28.788428    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:28.788428    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:31.431609   14284 node_ready.go:53] node "calico-503300" has status "Ready":"False"
	I0308 01:32:31.573513   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:32:31.573513   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:31.573902   14284 sshutil.go:53] new ssh client: &{IP:172.20.55.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\id_rsa Username:docker}
	I0308 01:32:31.815458   14284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 01:32:32.315461   14284 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0308 01:32:32.319250   14284 addons.go:505] duration metric: took 13.491967s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0308 01:32:34.383531   14284 node_ready.go:53] node "calico-503300" has status "Ready":"False"
	I0308 01:32:31.954483    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:31.958462    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:31.962873    3724 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 01:32:31.963031    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:32:31.983205    3724 ssh_runner.go:195] Run: cat /version.json
	I0308 01:32:31.983397    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:32:34.560082    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:34.560164    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:34.560267    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:34.655083    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:34.655083    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:34.655202    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:36.686467   14284 node_ready.go:53] node "calico-503300" has status "Ready":"False"
	I0308 01:32:38.934637   14284 node_ready.go:53] node "calico-503300" has status "Ready":"False"
	I0308 01:32:37.763987    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:37.763987    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:37.771940    3724 sshutil.go:53] new ssh client: &{IP:172.20.54.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\pause-549000\id_rsa Username:docker}
	I0308 01:32:37.849496    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:37.849496    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:37.849496    3724 sshutil.go:53] new ssh client: &{IP:172.20.54.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\pause-549000\id_rsa Username:docker}
	I0308 01:32:37.868364    3724 ssh_runner.go:235] Completed: cat /version.json: (5.8850483s)
	I0308 01:32:37.882222    3724 ssh_runner.go:195] Run: systemctl --version
	I0308 01:32:37.917749    3724 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0308 01:32:39.940789    3724 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (2.0230213s)
	W0308 01:32:39.940789    3724 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 01:32:39.940789    3724 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.9777954s)
	W0308 01:32:39.941391    3724 start.go:862] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0308 01:32:39.941544    3724 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0308 01:32:39.941630    3724 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0308 01:32:39.954038    3724 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 01:32:39.969958    3724 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0308 01:32:39.969958    3724 start.go:494] detecting cgroup driver to use...
	I0308 01:32:39.972112    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:32:40.039870    3724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 01:32:40.082416    3724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 01:32:40.112111    3724 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 01:32:40.132293    3724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 01:32:40.169725    3724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:32:40.209615    3724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 01:32:40.243573    3724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:32:40.284812    3724 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 01:32:40.323147    3724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 01:32:40.370756    3724 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 01:32:40.407008    3724 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 01:32:40.454931    3724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:32:40.747187    3724 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0308 01:32:40.786235    3724 start.go:494] detecting cgroup driver to use...
	I0308 01:32:40.801148    3724 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 01:32:40.847229    3724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:32:40.888865    3724 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 01:32:40.956169    3724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:32:41.012361    3724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 01:32:41.055468    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:32:41.113185    3724 ssh_runner.go:195] Run: which cri-dockerd
	I0308 01:32:41.142380    3724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 01:32:41.163862    3724 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 01:32:41.216077    3724 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 01:32:41.448380   14284 node_ready.go:49] node "calico-503300" has status "Ready":"True"
	I0308 01:32:41.448514   14284 node_ready.go:38] duration metric: took 19.034386s for node "calico-503300" to be "Ready" ...
	I0308 01:32:41.448568   14284 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:32:41.473951   14284 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace to be "Ready" ...
	I0308 01:32:43.697040   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:41.705397    3724 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 01:32:42.245684    3724 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 01:32:42.245997    3724 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0308 01:32:42.346853    3724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:32:42.863726    3724 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 01:32:45.997524   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:48.495020   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:50.497376   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:52.546184   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:54.999141   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:55.057577    3724 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.1936506s)
	I0308 01:32:55.070127    3724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 01:32:55.124365    3724 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0308 01:32:55.176152    3724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:32:55.224044    3724 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 01:32:55.535376    3724 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 01:32:55.836340    3724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:32:56.086399    3724 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 01:32:56.134533    3724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:32:56.183170    3724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:32:56.492756    3724 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 01:32:56.644796    3724 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 01:32:56.662378    3724 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 01:32:56.795379    3724 start.go:562] Will wait 60s for crictl version
	I0308 01:32:56.813454    3724 ssh_runner.go:195] Run: which crictl
	I0308 01:32:56.838565    3724 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 01:32:57.014099    3724 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 01:32:57.029571    3724 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:32:57.089370    3724 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:32:57.496362   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:59.554087   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:57.136917    3724 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 01:32:57.137164    3724 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 01:32:57.144507    3724 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 01:32:57.144507    3724 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 01:32:57.144507    3724 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 01:32:57.144507    3724 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 01:32:57.149765    3724 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 01:32:57.149765    3724 ip.go:210] interface addr: 172.20.48.1/20
	I0308 01:32:57.170678    3724 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 01:32:57.177951    3724 kubeadm.go:877] updating cluster {Name:pause-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:pause-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.54.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin
:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 01:32:57.178389    3724 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 01:32:57.191837    3724 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:32:57.236219    3724 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0308 01:32:57.236219    3724 docker.go:615] Images already preloaded, skipping extraction
	I0308 01:32:57.250188    3724 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:32:57.331998    3724 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0308 01:32:57.332112    3724 cache_images.go:84] Images are preloaded, skipping loading
	I0308 01:32:57.332167    3724 kubeadm.go:928] updating node { 172.20.54.215 8443 v1.28.4 docker true true} ...
	I0308 01:32:57.332262    3724 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-549000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.54.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 01:32:57.347524    3724 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0308 01:32:57.431910    3724 cni.go:84] Creating CNI manager for ""
	I0308 01:32:57.431910    3724 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0308 01:32:57.431910    3724 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 01:32:57.431910    3724 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.54.215 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-549000 NodeName:pause-549000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.54.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.54.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 01:32:57.432450    3724 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.54.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-549000"
	  kubeletExtraArgs:
	    node-ip: 172.20.54.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.54.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 01:32:57.456499    3724 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 01:32:57.482587    3724 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 01:32:57.504743    3724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 01:32:57.524499    3724 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0308 01:32:57.570114    3724 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 01:32:57.671074    3724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0308 01:32:57.759484    3724 ssh_runner.go:195] Run: grep 172.20.54.215	control-plane.minikube.internal$ /etc/hosts
	I0308 01:32:57.787378    3724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:32:58.256950    3724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:32:58.311600    3724 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000 for IP: 172.20.54.215
	I0308 01:32:58.311600    3724 certs.go:194] generating shared ca certs ...
	I0308 01:32:58.311600    3724 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:32:58.312870    3724 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 01:32:58.313416    3724 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 01:32:58.313624    3724 certs.go:256] generating profile certs ...
	I0308 01:32:58.314389    3724 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\client.key
	I0308 01:32:58.314732    3724 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\apiserver.key.61ed7ffd
	I0308 01:32:58.315195    3724 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\proxy-client.key
	I0308 01:32:58.317644    3724 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 01:32:58.318207    3724 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 01:32:58.318361    3724 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 01:32:58.318843    3724 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 01:32:58.319240    3724 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 01:32:58.319545    3724 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 01:32:58.319545    3724 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 01:32:58.321838    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 01:32:58.479372    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 01:32:58.621191    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 01:32:58.727121    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 01:32:58.930703    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0308 01:32:59.164913    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 01:32:59.296663    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 01:32:59.410620    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 01:32:59.526978    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 01:32:59.649578    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 01:32:59.765629    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 01:32:59.879840    3724 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 01:32:59.968684    3724 ssh_runner.go:195] Run: openssl version
	I0308 01:33:00.022761    3724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 01:33:00.081169    3724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:33:00.088704    3724 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:33:00.114084    3724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:33:00.146863    3724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 01:33:00.198124    3724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 01:33:00.251750    3724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 01:33:00.261748    3724 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 01:33:00.286137    3724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 01:33:00.322709    3724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0308 01:33:00.374386    3724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 01:33:00.418362    3724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 01:33:00.429677    3724 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 01:33:00.444242    3724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 01:33:00.473574    3724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 01:33:00.527482    3724 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 01:33:00.557187    3724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 01:33:00.599946    3724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 01:33:00.644340    3724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 01:33:00.677432    3724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 01:33:00.714491    3724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 01:33:00.747220    3724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0308 01:33:00.763151    3724 kubeadm.go:391] StartCluster: {Name:pause-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4
ClusterName:pause-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.54.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fa
lse olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 01:33:00.777563    3724 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 01:33:00.840918    3724 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 01:33:00.866834    3724 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 01:33:00.866834    3724 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 01:33:00.866834    3724 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 01:33:00.884019    3724 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 01:33:00.905768    3724 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 01:33:00.909110    3724 kubeconfig.go:125] found "pause-549000" server: "https://172.20.54.215:8443"
	I0308 01:33:00.913854    3724 kapi.go:59] client config for pause-549000: &rest.Config{Host:"https://172.20.54.215:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\pause-549000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\pause-549000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 01:33:00.937550    3724 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 01:33:00.960943    3724 kubeadm.go:624] The running cluster does not require reconfiguration: 172.20.54.215
	I0308 01:33:00.961002    3724 kubeadm.go:1153] stopping kube-system containers ...
	I0308 01:33:00.972700    3724 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 01:33:01.030510    3724 docker.go:483] Stopping containers: [6cbd157ab876 1650ae73fce3 ca0870d599f1 8961256e70cb 0fe3021a276b 7e5dd9cf598f a4f413a3fab3 62c0412021bf 7a74af2b7663 96387479d692 c7cf0231ec49 0a1f04df7c18 7188ec9f8a67 b3d15e4a825c 5818f28c11b1 d8ce4d2e487d 27fc38536e0f cc0d865dfbff 8bf42ffc2d57 519087cb40bc 0d86f85b0efc 62063655f425 0431e581e1a9 96ac1ab8ac35]
	I0308 01:33:01.043286    3724 ssh_runner.go:195] Run: docker stop 6cbd157ab876 1650ae73fce3 ca0870d599f1 8961256e70cb 0fe3021a276b 7e5dd9cf598f a4f413a3fab3 62c0412021bf 7a74af2b7663 96387479d692 c7cf0231ec49 0a1f04df7c18 7188ec9f8a67 b3d15e4a825c 5818f28c11b1 d8ce4d2e487d 27fc38536e0f cc0d865dfbff 8bf42ffc2d57 519087cb40bc 0d86f85b0efc 62063655f425 0431e581e1a9 96ac1ab8ac35
	I0308 01:33:02.033488   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:33:04.483961   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:33:06.994377   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:33:07.991022   14284 pod_ready.go:92] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:07.991064   14284 pod_ready.go:81] duration metric: took 26.5168703s for pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:07.991096   14284 pod_ready.go:78] waiting up to 15m0s for pod "calico-node-ft27j" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.011295   14284 pod_ready.go:92] pod "calico-node-ft27j" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:10.011405   14284 pod_ready.go:81] duration metric: took 2.0202914s for pod "calico-node-ft27j" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.011481   14284 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-bfdql" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.024490   14284 pod_ready.go:92] pod "coredns-5dd5756b68-bfdql" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:10.024490   14284 pod_ready.go:81] duration metric: took 13.0089ms for pod "coredns-5dd5756b68-bfdql" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.024490   14284 pod_ready.go:78] waiting up to 15m0s for pod "etcd-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.029662   14284 pod_ready.go:92] pod "etcd-calico-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:10.029662   14284 pod_ready.go:81] duration metric: took 5.172ms for pod "etcd-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.029662   14284 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.042284   14284 pod_ready.go:92] pod "kube-apiserver-calico-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:10.042338   14284 pod_ready.go:81] duration metric: took 12.6762ms for pod "kube-apiserver-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.042338   14284 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.050208   14284 pod_ready.go:92] pod "kube-controller-manager-calico-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:10.050208   14284 pod_ready.go:81] duration metric: took 7.8701ms for pod "kube-controller-manager-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.050208   14284 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-fplhq" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.560604    3724 ssh_runner.go:235] Completed: docker stop 6cbd157ab876 1650ae73fce3 ca0870d599f1 8961256e70cb 0fe3021a276b 7e5dd9cf598f a4f413a3fab3 62c0412021bf 7a74af2b7663 96387479d692 c7cf0231ec49 0a1f04df7c18 7188ec9f8a67 b3d15e4a825c 5818f28c11b1 d8ce4d2e487d 27fc38536e0f cc0d865dfbff 8bf42ffc2d57 519087cb40bc 0d86f85b0efc 62063655f425 0431e581e1a9 96ac1ab8ac35: (9.517125s)
	I0308 01:33:10.580045    3724 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 01:33:10.663503    3724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 01:33:10.693512    3724 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Mar  8 01:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Mar  8 01:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Mar  8 01:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Mar  8 01:25 /etc/kubernetes/scheduler.conf
	
	I0308 01:33:10.707636    3724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 01:33:10.745474    3724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 01:33:10.777558    3724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 01:33:10.799546    3724 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0308 01:33:10.816580    3724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 01:33:10.856857    3724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 01:33:10.882714    3724 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0308 01:33:10.899063    3724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 01:33:10.927760    3724 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 01:33:10.944518    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 01:33:11.067680    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 01:33:10.417308   14284 pod_ready.go:92] pod "kube-proxy-fplhq" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:10.417308   14284 pod_ready.go:81] duration metric: took 367.0964ms for pod "kube-proxy-fplhq" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.417408   14284 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.816580   14284 pod_ready.go:92] pod "kube-scheduler-calico-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:10.816580   14284 pod_ready.go:81] duration metric: took 399.1676ms for pod "kube-scheduler-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.816580   14284 pod_ready.go:38] duration metric: took 29.3676894s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:33:10.817165   14284 api_server.go:52] waiting for apiserver process to appear ...
	I0308 01:33:10.836829   14284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 01:33:10.871770   14284 api_server.go:72] duration metric: took 52.044227s to wait for apiserver process to appear ...
	I0308 01:33:10.872065   14284 api_server.go:88] waiting for apiserver healthz status ...
	I0308 01:33:10.872065   14284 api_server.go:253] Checking apiserver healthz at https://172.20.55.16:8443/healthz ...
	I0308 01:33:10.881083   14284 api_server.go:279] https://172.20.55.16:8443/healthz returned 200:
	ok
	I0308 01:33:10.886756   14284 api_server.go:141] control plane version: v1.28.4
	I0308 01:33:10.886756   14284 api_server.go:131] duration metric: took 14.691ms to wait for apiserver health ...
	I0308 01:33:10.887303   14284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 01:33:11.036422   14284 system_pods.go:59] 9 kube-system pods found
	I0308 01:33:11.036422   14284 system_pods.go:61] "calico-kube-controllers-5fc7d6cf67-85tcb" [38a54c04-2e49-40fa-b967-ee05cd6fe5da] Running
	I0308 01:33:11.036422   14284 system_pods.go:61] "calico-node-ft27j" [b5373aca-4680-478b-8c0c-dc23e6c42dd5] Running
	I0308 01:33:11.036422   14284 system_pods.go:61] "coredns-5dd5756b68-bfdql" [6cfc0369-6ac8-4950-bc73-f73eb8930433] Running
	I0308 01:33:11.036422   14284 system_pods.go:61] "etcd-calico-503300" [1196d668-93b6-456c-9b38-dd4df91fc430] Running
	I0308 01:33:11.036989   14284 system_pods.go:61] "kube-apiserver-calico-503300" [2771afef-fd80-4658-85e3-a5922a7a24f9] Running
	I0308 01:33:11.037053   14284 system_pods.go:61] "kube-controller-manager-calico-503300" [0219c0b0-2485-4ab2-a40a-471399f6b59d] Running
	I0308 01:33:11.037089   14284 system_pods.go:61] "kube-proxy-fplhq" [2e488d0d-d07d-495b-9b04-db460bb0f650] Running
	I0308 01:33:11.037131   14284 system_pods.go:61] "kube-scheduler-calico-503300" [8f5f359e-c9a5-471e-b069-7cd6f272f204] Running
	I0308 01:33:11.037131   14284 system_pods.go:61] "storage-provisioner" [76cea2ce-35a3-41d2-aa15-bd300ad66a38] Running
	I0308 01:33:11.037170   14284 system_pods.go:74] duration metric: took 149.8656ms to wait for pod list to return data ...
	I0308 01:33:11.037170   14284 default_sa.go:34] waiting for default service account to be created ...
	I0308 01:33:11.215548   14284 default_sa.go:45] found service account: "default"
	I0308 01:33:11.215548   14284 default_sa.go:55] duration metric: took 178.3766ms for default service account to be created ...
	I0308 01:33:11.215548   14284 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 01:33:11.426531   14284 system_pods.go:86] 9 kube-system pods found
	I0308 01:33:11.426531   14284 system_pods.go:89] "calico-kube-controllers-5fc7d6cf67-85tcb" [38a54c04-2e49-40fa-b967-ee05cd6fe5da] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "calico-node-ft27j" [b5373aca-4680-478b-8c0c-dc23e6c42dd5] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "coredns-5dd5756b68-bfdql" [6cfc0369-6ac8-4950-bc73-f73eb8930433] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "etcd-calico-503300" [1196d668-93b6-456c-9b38-dd4df91fc430] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "kube-apiserver-calico-503300" [2771afef-fd80-4658-85e3-a5922a7a24f9] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "kube-controller-manager-calico-503300" [0219c0b0-2485-4ab2-a40a-471399f6b59d] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "kube-proxy-fplhq" [2e488d0d-d07d-495b-9b04-db460bb0f650] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "kube-scheduler-calico-503300" [8f5f359e-c9a5-471e-b069-7cd6f272f204] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "storage-provisioner" [76cea2ce-35a3-41d2-aa15-bd300ad66a38] Running
	I0308 01:33:11.426531   14284 system_pods.go:126] duration metric: took 210.9809ms to wait for k8s-apps to be running ...
	I0308 01:33:11.426531   14284 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 01:33:11.446244   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 01:33:11.469822   14284 system_svc.go:56] duration metric: took 43.2907ms WaitForService to wait for kubelet
	I0308 01:33:11.471585   14284 kubeadm.go:576] duration metric: took 52.6440361s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 01:33:11.471585   14284 node_conditions.go:102] verifying NodePressure condition ...
	I0308 01:33:11.603699   14284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 01:33:11.603699   14284 node_conditions.go:123] node cpu capacity is 2
	I0308 01:33:11.603699   14284 node_conditions.go:105] duration metric: took 132.1134ms to run NodePressure ...
	I0308 01:33:11.603699   14284 start.go:240] waiting for startup goroutines ...
	I0308 01:33:11.603699   14284 start.go:245] waiting for cluster config update ...
	I0308 01:33:11.603699   14284 start.go:254] writing updated cluster config ...
	I0308 01:33:11.619534   14284 ssh_runner.go:195] Run: rm -f paused
	I0308 01:33:11.784862   14284 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 01:33:11.788007   14284 out.go:177] * Done! kubectl is now configured to use "calico-503300" cluster and "default" namespace by default
	I0308 01:33:12.285351    3724 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2175822s)
	I0308 01:33:12.285351    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 01:33:12.672976    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 01:33:12.772749    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 01:33:12.897354    3724 api_server.go:52] waiting for apiserver process to appear ...
	I0308 01:33:12.910697    3724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 01:33:12.941451    3724 api_server.go:72] duration metric: took 43.9369ms to wait for apiserver process to appear ...
	I0308 01:33:12.941451    3724 api_server.go:88] waiting for apiserver healthz status ...
	I0308 01:33:12.941548    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:17.952426    3724 api_server.go:269] stopped: https://172.20.54.215:8443/healthz: Get "https://172.20.54.215:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0308 01:33:17.952505    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:22.968019    3724 api_server.go:269] stopped: https://172.20.54.215:8443/healthz: Get "https://172.20.54.215:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0308 01:33:22.968019    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:24.994360    3724 api_server.go:269] stopped: https://172.20.54.215:8443/healthz: Get "https://172.20.54.215:8443/healthz": read tcp 172.20.48.1:58524->172.20.54.215:8443: wsarecv: An existing connection was forcibly closed by the remote host.
	I0308 01:33:24.994564    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:27.051479    3724 api_server.go:269] stopped: https://172.20.54.215:8443/healthz: Get "https://172.20.54.215:8443/healthz": dial tcp 172.20.54.215:8443: connectex: No connection could be made because the target machine actively refused it.
	I0308 01:33:27.051605    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:30.837681    3724 api_server.go:279] https://172.20.54.215:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 01:33:30.837719    3724 api_server.go:103] status: https://172.20.54.215:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 01:33:30.837719    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:30.922613    3724 api_server.go:279] https://172.20.54.215:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 01:33:30.923703    3724 api_server.go:103] status: https://172.20.54.215:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 01:33:30.953030    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:31.016429    3724 api_server.go:279] https://172.20.54.215:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 01:33:31.016586    3724 api_server.go:103] status: https://172.20.54.215:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 01:33:31.443461    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:31.457645    3724 api_server.go:279] https://172.20.54.215:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 01:33:31.457645    3724 api_server.go:103] status: https://172.20.54.215:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 01:33:31.947419    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:31.961418    3724 api_server.go:279] https://172.20.54.215:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 01:33:31.961516    3724 api_server.go:103] status: https://172.20.54.215:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 01:33:32.450061    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:32.461345    3724 api_server.go:279] https://172.20.54.215:8443/healthz returned 200:
	ok
	I0308 01:33:32.480590    3724 api_server.go:141] control plane version: v1.28.4
	I0308 01:33:32.480646    3724 api_server.go:131] duration metric: took 19.5390137s to wait for apiserver health ...
	I0308 01:33:32.480688    3724 cni.go:84] Creating CNI manager for ""
	I0308 01:33:32.480688    3724 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0308 01:33:32.483537    3724 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 01:33:32.494603    3724 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 01:33:32.523107    3724 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 01:33:32.564524    3724 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 01:33:32.584227    3724 system_pods.go:59] 6 kube-system pods found
	I0308 01:33:32.584227    3724 system_pods.go:61] "coredns-5dd5756b68-2q5bn" [f6d1c69d-3975-46dc-b037-11d53142d1f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 01:33:32.584227    3724 system_pods.go:61] "etcd-pause-549000" [486e4fef-9f89-4ac9-a7ac-68b4793b1fc1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 01:33:32.584227    3724 system_pods.go:61] "kube-apiserver-pause-549000" [1399376d-526e-4406-8bb0-da40ba4023eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 01:33:32.584769    3724 system_pods.go:61] "kube-controller-manager-pause-549000" [90fcf813-4dab-47d7-8d70-a57106cc2358] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 01:33:32.584880    3724 system_pods.go:61] "kube-proxy-z8xr2" [ff75380d-e287-4d97-bd11-67036d795d5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 01:33:32.584880    3724 system_pods.go:61] "kube-scheduler-pause-549000" [616d7e92-28f7-41b9-8f1e-18fbbf5e246f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 01:33:32.584880    3724 system_pods.go:74] duration metric: took 20.3026ms to wait for pod list to return data ...
	I0308 01:33:32.584880    3724 node_conditions.go:102] verifying NodePressure condition ...
	I0308 01:33:32.592265    3724 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 01:33:32.592265    3724 node_conditions.go:123] node cpu capacity is 2
	I0308 01:33:32.592265    3724 node_conditions.go:105] duration metric: took 7.385ms to run NodePressure ...
	I0308 01:33:32.592265    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 01:33:33.311154    3724 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 01:33:33.345015    3724 kubeadm.go:733] kubelet initialised
	I0308 01:33:33.345015    3724 kubeadm.go:734] duration metric: took 33.8052ms waiting for restarted kubelet to initialise ...
	I0308 01:33:33.345015    3724 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:33:33.369547    3724 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-2q5bn" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:34.993719    3724 pod_ready.go:92] pod "coredns-5dd5756b68-2q5bn" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:34.993816    3724 pod_ready.go:81] duration metric: took 1.6241565s for pod "coredns-5dd5756b68-2q5bn" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:34.993853    3724 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.012933    3724 pod_ready.go:92] pod "etcd-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:37.012933    3724 pod_ready.go:81] duration metric: took 2.0190607s for pod "etcd-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.012933    3724 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.027883    3724 pod_ready.go:92] pod "kube-apiserver-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:37.027883    3724 pod_ready.go:81] duration metric: took 14.9505ms for pod "kube-apiserver-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.027883    3724 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.038715    3724 pod_ready.go:92] pod "kube-controller-manager-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:37.038715    3724 pod_ready.go:81] duration metric: took 10.8315ms for pod "kube-controller-manager-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.038715    3724 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z8xr2" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.048854    3724 pod_ready.go:92] pod "kube-proxy-z8xr2" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:37.048854    3724 pod_ready.go:81] duration metric: took 10.1392ms for pod "kube-proxy-z8xr2" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.048854    3724 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.062620    3724 pod_ready.go:92] pod "kube-scheduler-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:37.062677    3724 pod_ready.go:81] duration metric: took 13.7657ms for pod "kube-scheduler-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.062677    3724 pod_ready.go:38] duration metric: took 3.7176278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:33:37.062732    3724 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 01:33:37.087173    3724 ops.go:34] apiserver oom_adj: -16
	I0308 01:33:37.087269    3724 kubeadm.go:591] duration metric: took 36.2201028s to restartPrimaryControlPlane
	I0308 01:33:37.087321    3724 kubeadm.go:393] duration metric: took 36.3238928s to StartCluster
	I0308 01:33:37.087476    3724 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:33:37.087655    3724 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 01:33:37.091619    3724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:33:37.093647    3724 start.go:234] Will wait 6m0s for node &{Name: IP:172.20.54.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 01:33:37.093647    3724 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 01:33:37.094192    3724 config.go:182] Loaded profile config "pause-549000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:33:37.427652    3724 out.go:177] * Enabled addons: 
	I0308 01:33:37.377084    3724 out.go:177] * Verifying Kubernetes components...
	I0308 01:33:37.597442    3724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:33:37.614513    3724 addons.go:505] duration metric: took 520.8611ms for enable addons: enabled=[]
	I0308 01:33:37.956483    3724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:33:37.986344    3724 node_ready.go:35] waiting up to 6m0s for node "pause-549000" to be "Ready" ...
	I0308 01:33:37.992045    3724 node_ready.go:49] node "pause-549000" has status "Ready":"True"
	I0308 01:33:37.992045    3724 node_ready.go:38] duration metric: took 5.7015ms for node "pause-549000" to be "Ready" ...
	I0308 01:33:37.992045    3724 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:33:38.011395    3724 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2q5bn" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:38.016764    3724 pod_ready.go:92] pod "coredns-5dd5756b68-2q5bn" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:38.016764    3724 pod_ready.go:81] duration metric: took 5.3684ms for pod "coredns-5dd5756b68-2q5bn" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:38.016764    3724 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:38.223033    3724 pod_ready.go:92] pod "etcd-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:38.223085    3724 pod_ready.go:81] duration metric: took 206.3199ms for pod "etcd-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:38.223085    3724 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:38.613742    3724 pod_ready.go:92] pod "kube-apiserver-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:38.613742    3724 pod_ready.go:81] duration metric: took 390.6527ms for pod "kube-apiserver-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:38.613742    3724 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:39.011541    3724 pod_ready.go:92] pod "kube-controller-manager-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:39.011693    3724 pod_ready.go:81] duration metric: took 397.9474ms for pod "kube-controller-manager-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:39.011693    3724 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z8xr2" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:39.420342    3724 pod_ready.go:92] pod "kube-proxy-z8xr2" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:39.420342    3724 pod_ready.go:81] duration metric: took 408.6454ms for pod "kube-proxy-z8xr2" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:39.420427    3724 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:39.818930    3724 pod_ready.go:92] pod "kube-scheduler-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:39.818930    3724 pod_ready.go:81] duration metric: took 398.4991ms for pod "kube-scheduler-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:39.818930    3724 pod_ready.go:38] duration metric: took 1.8268672s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:33:39.818930    3724 api_server.go:52] waiting for apiserver process to appear ...
	I0308 01:33:39.837796    3724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 01:33:39.869210    3724 api_server.go:72] duration metric: took 2.775537s to wait for apiserver process to appear ...
	I0308 01:33:39.869274    3724 api_server.go:88] waiting for apiserver healthz status ...
	I0308 01:33:39.869274    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:39.881871    3724 api_server.go:279] https://172.20.54.215:8443/healthz returned 200:
	ok
	I0308 01:33:39.884801    3724 api_server.go:141] control plane version: v1.28.4
	I0308 01:33:39.884914    3724 api_server.go:131] duration metric: took 15.5269ms to wait for apiserver health ...
	I0308 01:33:39.884914    3724 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 01:33:40.015167    3724 system_pods.go:59] 6 kube-system pods found
	I0308 01:33:40.015167    3724 system_pods.go:61] "coredns-5dd5756b68-2q5bn" [f6d1c69d-3975-46dc-b037-11d53142d1f1] Running
	I0308 01:33:40.015167    3724 system_pods.go:61] "etcd-pause-549000" [486e4fef-9f89-4ac9-a7ac-68b4793b1fc1] Running
	I0308 01:33:40.015167    3724 system_pods.go:61] "kube-apiserver-pause-549000" [1399376d-526e-4406-8bb0-da40ba4023eb] Running
	I0308 01:33:40.015167    3724 system_pods.go:61] "kube-controller-manager-pause-549000" [90fcf813-4dab-47d7-8d70-a57106cc2358] Running
	I0308 01:33:40.015167    3724 system_pods.go:61] "kube-proxy-z8xr2" [ff75380d-e287-4d97-bd11-67036d795d5a] Running
	I0308 01:33:40.015167    3724 system_pods.go:61] "kube-scheduler-pause-549000" [616d7e92-28f7-41b9-8f1e-18fbbf5e246f] Running
	I0308 01:33:40.015167    3724 system_pods.go:74] duration metric: took 130.2514ms to wait for pod list to return data ...
	I0308 01:33:40.015167    3724 default_sa.go:34] waiting for default service account to be created ...
	I0308 01:33:40.218050    3724 default_sa.go:45] found service account: "default"
	I0308 01:33:40.218050    3724 default_sa.go:55] duration metric: took 202.8813ms for default service account to be created ...
	I0308 01:33:40.218050    3724 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 01:33:40.413640    3724 system_pods.go:86] 6 kube-system pods found
	I0308 01:33:40.413640    3724 system_pods.go:89] "coredns-5dd5756b68-2q5bn" [f6d1c69d-3975-46dc-b037-11d53142d1f1] Running
	I0308 01:33:40.413640    3724 system_pods.go:89] "etcd-pause-549000" [486e4fef-9f89-4ac9-a7ac-68b4793b1fc1] Running
	I0308 01:33:40.413640    3724 system_pods.go:89] "kube-apiserver-pause-549000" [1399376d-526e-4406-8bb0-da40ba4023eb] Running
	I0308 01:33:40.413640    3724 system_pods.go:89] "kube-controller-manager-pause-549000" [90fcf813-4dab-47d7-8d70-a57106cc2358] Running
	I0308 01:33:40.413640    3724 system_pods.go:89] "kube-proxy-z8xr2" [ff75380d-e287-4d97-bd11-67036d795d5a] Running
	I0308 01:33:40.413640    3724 system_pods.go:89] "kube-scheduler-pause-549000" [616d7e92-28f7-41b9-8f1e-18fbbf5e246f] Running
	I0308 01:33:40.413640    3724 system_pods.go:126] duration metric: took 195.5885ms to wait for k8s-apps to be running ...
	I0308 01:33:40.413640    3724 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 01:33:40.427226    3724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 01:33:40.460832    3724 system_svc.go:56] duration metric: took 47.1919ms WaitForService to wait for kubelet
	I0308 01:33:40.461008    3724 kubeadm.go:576] duration metric: took 3.3673294s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 01:33:40.461057    3724 node_conditions.go:102] verifying NodePressure condition ...
	I0308 01:33:40.610529    3724 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 01:33:40.610529    3724 node_conditions.go:123] node cpu capacity is 2
	I0308 01:33:40.610529    3724 node_conditions.go:105] duration metric: took 149.4199ms to run NodePressure ...
	I0308 01:33:40.610529    3724 start.go:240] waiting for startup goroutines ...
	I0308 01:33:40.610529    3724 start.go:245] waiting for cluster config update ...
	I0308 01:33:40.610529    3724 start.go:254] writing updated cluster config ...
	I0308 01:33:40.626078    3724 ssh_runner.go:195] Run: rm -f paused
	I0308 01:33:40.779094    3724 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 01:33:40.783065    3724 out.go:177] * Done! kubectl is now configured to use "pause-549000" cluster and "default" namespace by default
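	Note on the health gate shown above: the api_server.go lines repeatedly GET https://172.20.54.215:8443/healthz, logging the 403 ("system:anonymous cannot get path /healthz") and 500 bodies that appear while post-start hooks such as rbac/bootstrap-roles are still completing, and only move on once the endpoint returns 200 "ok". As a rough illustration of that polling pattern only (a standalone sketch, not minikube's actual implementation; the URL, per-request timeout, and retry cadence are assumptions read off the log), the loop looks roughly like this in Go:

	// healthzpoll.go - hedged sketch of the /healthz polling pattern seen in the
	// api_server.go log lines above; not minikube source code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // roughly the per-request timeouts visible in the log
			Transport: &http.Transport{
				// the probe is anonymous, so certificate verification is skipped in this sketch
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered "ok"
				}
				// 403/500 bodies like the ones captured above land here
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			} else {
				fmt.Printf("healthz not reachable yet: %v\n", err)
			}
			time.Sleep(500 * time.Millisecond) // approximate retry cadence from the log
		}
		return fmt.Errorf("apiserver did not become healthy within %s", deadline)
	}

	func main() {
		// control-plane IP taken from the log; substitute as needed
		if err := waitForHealthz("https://172.20.54.215:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

	In the captured run, the early 403s correspond to the anonymous probe arriving before the rbac/bootstrap-roles post-start hook has finished; once that hook and the priority-class bootstrap report ok, the check flips to 200 and the start sequence continues.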
	
	
	==> Docker <==
	Mar 08 01:33:25 pause-549000 dockerd[8323]: time="2024-03-08T01:33:25.029443316Z" level=info msg="shim disconnected" id=a3aed9f888fa47f5a8f08b19b0c45e2e5f421ed50a15f7f25828639ba298851b namespace=moby
	Mar 08 01:33:25 pause-549000 dockerd[8317]: time="2024-03-08T01:33:25.029667717Z" level=info msg="ignoring event" container=a3aed9f888fa47f5a8f08b19b0c45e2e5f421ed50a15f7f25828639ba298851b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 08 01:33:25 pause-549000 dockerd[8323]: time="2024-03-08T01:33:25.030244721Z" level=warning msg="cleaning up after shim disconnected" id=a3aed9f888fa47f5a8f08b19b0c45e2e5f421ed50a15f7f25828639ba298851b namespace=moby
	Mar 08 01:33:25 pause-549000 dockerd[8323]: time="2024-03-08T01:33:25.030472723Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.551869276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.551984377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.552009277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.552195079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.606894376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.607008877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.607044177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.608956791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.787138185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.787228085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.787250485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.788033191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:31 pause-549000 cri-dockerd[8593]: time="2024-03-08T01:33:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.034410894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.037593417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.038030620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.041526246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.047815891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.047899992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.047924392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.048503996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fbd8e6022b6dc       83f6cc407eed8       About a minute ago   Running             kube-proxy                2                   1aedc88943e82       kube-proxy-z8xr2
	8cbe07a1943bd       ead0a4a53df89       About a minute ago   Running             coredns                   2                   f54c931dc863c       coredns-5dd5756b68-2q5bn
	70615601747b1       7fe0e6f37db33       About a minute ago   Running             kube-apiserver            3                   50c6152f1e6ae       kube-apiserver-pause-549000
	15d872b3f05b6       d058aa5ab969c       About a minute ago   Running             kube-controller-manager   2                   916b477f617a7       kube-controller-manager-pause-549000
	97d7744bd1667       73deb9a3f7025       About a minute ago   Running             etcd                      2                   79b4c6c608b30       etcd-pause-549000
	a3aed9f888fa4       7fe0e6f37db33       About a minute ago   Exited              kube-apiserver            2                   50c6152f1e6ae       kube-apiserver-pause-549000
	be9eaddf3ddcc       e3db313c6dbc0       About a minute ago   Running             kube-scheduler            2                   d2335daff70e6       kube-scheduler-pause-549000
	6cbd157ab876a       ead0a4a53df89       About a minute ago   Exited              coredns                   1                   a4f413a3fab36       coredns-5dd5756b68-2q5bn
	1650ae73fce37       83f6cc407eed8       About a minute ago   Exited              kube-proxy                1                   c7cf0231ec497       kube-proxy-z8xr2
	ca0870d599f16       d058aa5ab969c       About a minute ago   Exited              kube-controller-manager   1                   62c0412021bfe       kube-controller-manager-pause-549000
	8961256e70cbe       73deb9a3f7025       About a minute ago   Exited              etcd                      1                   7a74af2b7663b       etcd-pause-549000
	0fe3021a276bc       e3db313c6dbc0       About a minute ago   Exited              kube-scheduler            1                   96387479d6922       kube-scheduler-pause-549000
	
	
	==> coredns [6cbd157ab876] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b0d01e750f1333b12a0afb000b64bd021779da79ee4f8aee5ecad4705d75b53898cf9670ad125c407f1c536554c13092ed2cbd72906f6f0aabed3ba5d92a353f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43621 - 31752 "HINFO IN 1724806026985328266.1499265499857429649. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.491006586s
	
	
	==> coredns [8cbe07a1943b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b0d01e750f1333b12a0afb000b64bd021779da79ee4f8aee5ecad4705d75b53898cf9670ad125c407f1c536554c13092ed2cbd72906f6f0aabed3ba5d92a353f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36044 - 58758 "HINFO IN 3020783146318684593.1048067044606582722. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062755353s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[  +0.094654] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.804530] systemd-fstab-generator[2773]: Ignoring "noauto" option for root device
	[  +0.124510] kauditd_printk_skb: 62 callbacks suppressed
	[ +12.982461] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.129624] systemd-fstab-generator[3406]: Ignoring "noauto" option for root device
	[  +9.073674] kauditd_printk_skb: 82 callbacks suppressed
	[Mar 8 01:28] hrtimer: interrupt took 2131306 ns
	[Mar 8 01:32] systemd-fstab-generator[7887]: Ignoring "noauto" option for root device
	[  +0.843188] systemd-fstab-generator[7932]: Ignoring "noauto" option for root device
	[  +0.466922] systemd-fstab-generator[7944]: Ignoring "noauto" option for root device
	[  +0.739458] systemd-fstab-generator[7963]: Ignoring "noauto" option for root device
	[  +5.480217] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.247218] systemd-fstab-generator[8486]: Ignoring "noauto" option for root device
	[  +0.330010] systemd-fstab-generator[8498]: Ignoring "noauto" option for root device
	[  +0.265887] systemd-fstab-generator[8510]: Ignoring "noauto" option for root device
	[  +0.380888] systemd-fstab-generator[8552]: Ignoring "noauto" option for root device
	[  +1.643557] systemd-fstab-generator[8982]: Ignoring "noauto" option for root device
	[  +1.762188] kauditd_printk_skb: 179 callbacks suppressed
	[Mar 8 01:33] kauditd_printk_skb: 60 callbacks suppressed
	[  +2.019559] systemd-fstab-generator[10364]: Ignoring "noauto" option for root device
	[ +12.547255] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.138076] kauditd_printk_skb: 6 callbacks suppressed
	[  +4.559628] systemd-fstab-generator[11098]: Ignoring "noauto" option for root device
	[  +6.322832] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.104317] systemd-fstab-generator[11266]: Ignoring "noauto" option for root device
	
	
	==> etcd [8961256e70cb] <==
	{"level":"info","ts":"2024-03-08T01:33:00.130109Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"71.16665ms"}
	{"level":"info","ts":"2024-03-08T01:33:00.152372Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-03-08T01:33:00.275258Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"f11ddc63fc62bb97","local-member-id":"8cb6433ac2f96c64","commit-index":611}
	{"level":"info","ts":"2024-03-08T01:33:00.279721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 switched to configuration voters=()"}
	{"level":"info","ts":"2024-03-08T01:33:00.283848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 became follower at term 2"}
	{"level":"info","ts":"2024-03-08T01:33:00.284425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8cb6433ac2f96c64 [peers: [], term: 2, commit: 611, applied: 0, lastindex: 611, lastterm: 2]"}
	{"level":"warn","ts":"2024-03-08T01:33:00.323467Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-03-08T01:33:00.3741Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":511}
	{"level":"info","ts":"2024-03-08T01:33:00.384444Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-03-08T01:33:00.417235Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"8cb6433ac2f96c64","timeout":"7s"}
	{"level":"info","ts":"2024-03-08T01:33:00.420184Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"8cb6433ac2f96c64"}
	{"level":"info","ts":"2024-03-08T01:33:00.420867Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"8cb6433ac2f96c64","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-03-08T01:33:00.421594Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T01:33:00.421936Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T01:33:00.422592Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T01:33:00.424123Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-08T01:33:00.425615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 switched to configuration voters=(10139365530729540708)"}
	{"level":"info","ts":"2024-03-08T01:33:00.426299Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f11ddc63fc62bb97","local-member-id":"8cb6433ac2f96c64","added-peer-id":"8cb6433ac2f96c64","added-peer-peer-urls":["https://172.20.54.215:2380"]}
	{"level":"info","ts":"2024-03-08T01:33:00.427583Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f11ddc63fc62bb97","local-member-id":"8cb6433ac2f96c64","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T01:33:00.427998Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T01:33:00.455983Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T01:33:00.457193Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8cb6433ac2f96c64","initial-advertise-peer-urls":["https://172.20.54.215:2380"],"listen-peer-urls":["https://172.20.54.215:2380"],"advertise-client-urls":["https://172.20.54.215:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.54.215:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T01:33:00.457393Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.20.54.215:2380"}
	{"level":"info","ts":"2024-03-08T01:33:00.465505Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.20.54.215:2380"}
	{"level":"info","ts":"2024-03-08T01:33:00.459028Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [97d7744bd166] <==
	{"level":"info","ts":"2024-03-08T01:33:26.933926Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T01:33:26.934722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 switched to configuration voters=(10139365530729540708)"}
	{"level":"info","ts":"2024-03-08T01:33:26.935129Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f11ddc63fc62bb97","local-member-id":"8cb6433ac2f96c64","added-peer-id":"8cb6433ac2f96c64","added-peer-peer-urls":["https://172.20.54.215:2380"]}
	{"level":"info","ts":"2024-03-08T01:33:26.935607Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f11ddc63fc62bb97","local-member-id":"8cb6433ac2f96c64","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T01:33:26.94044Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T01:33:26.965723Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.20.54.215:2380"}
	{"level":"info","ts":"2024-03-08T01:33:26.965972Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.20.54.215:2380"}
	{"level":"info","ts":"2024-03-08T01:33:26.965458Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T01:33:26.968531Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T01:33:26.968452Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8cb6433ac2f96c64","initial-advertise-peer-urls":["https://172.20.54.215:2380"],"listen-peer-urls":["https://172.20.54.215:2380"],"advertise-client-urls":["https://172.20.54.215:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.54.215:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T01:33:28.566049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-08T01:33:28.566985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-08T01:33:28.567082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 received MsgPreVoteResp from 8cb6433ac2f96c64 at term 2"}
	{"level":"info","ts":"2024-03-08T01:33:28.567147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 became candidate at term 3"}
	{"level":"info","ts":"2024-03-08T01:33:28.567163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 received MsgVoteResp from 8cb6433ac2f96c64 at term 3"}
	{"level":"info","ts":"2024-03-08T01:33:28.567198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 became leader at term 3"}
	{"level":"info","ts":"2024-03-08T01:33:28.567213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8cb6433ac2f96c64 elected leader 8cb6433ac2f96c64 at term 3"}
	{"level":"info","ts":"2024-03-08T01:33:28.574147Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8cb6433ac2f96c64","local-member-attributes":"{Name:pause-549000 ClientURLs:[https://172.20.54.215:2379]}","request-path":"/0/members/8cb6433ac2f96c64/attributes","cluster-id":"f11ddc63fc62bb97","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T01:33:28.574197Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T01:33:28.584696Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T01:33:28.587834Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T01:33:28.587975Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T01:33:28.590586Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T01:33:28.599096Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.54.215:2379"}
	{"level":"info","ts":"2024-03-08T01:33:34.996562Z","caller":"traceutil/trace.go:171","msg":"trace[1486148562] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"270.66915ms","start":"2024-03-08T01:33:34.725866Z","end":"2024-03-08T01:33:34.996535Z","steps":["trace[1486148562] 'process raft request'  (duration: 270.280648ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:34:42 up 11 min,  0 users,  load average: 0.61, 0.57, 0.29
	Linux pause-549000 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [70615601747b] <==
	I0308 01:33:30.834417       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0308 01:33:30.834672       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0308 01:33:30.834927       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0308 01:33:30.964856       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 01:33:30.978557       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0308 01:33:30.980994       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0308 01:33:30.981032       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0308 01:33:30.984530       1 shared_informer.go:318] Caches are synced for configmaps
	I0308 01:33:30.984598       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0308 01:33:30.990578       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0308 01:33:30.991187       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 01:33:31.034957       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0308 01:33:31.035246       1 aggregator.go:166] initial CRD sync complete...
	I0308 01:33:31.035439       1 autoregister_controller.go:141] Starting autoregister controller
	I0308 01:33:31.035623       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0308 01:33:31.035810       1 cache.go:39] Caches are synced for autoregister controller
	I0308 01:33:31.691172       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0308 01:33:32.160037       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.20.54.215]
	I0308 01:33:32.162240       1 controller.go:624] quota admission added evaluator for: endpoints
	I0308 01:33:32.172192       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0308 01:33:32.843774       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0308 01:33:32.908752       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0308 01:33:33.158417       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0308 01:33:33.224947       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0308 01:33:33.260353       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [a3aed9f888fa] <==
	I0308 01:33:04.114485       1 server.go:148] Version: v1.28.4
	I0308 01:33:04.114549       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0308 01:33:04.991400       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0308 01:33:04.991758       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	W0308 01:33:04.993294       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0308 01:33:05.000490       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0308 01:33:05.000509       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0308 01:33:05.000705       1 instance.go:298] Using reconciler: lease
	W0308 01:33:05.002437       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:05.992865       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:05.994630       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:06.003960       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:07.323122       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:07.672864       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:07.861575       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:09.428196       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:09.916772       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:09.932475       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:13.494746       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:13.884531       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:14.058513       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:19.313464       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:20.380166       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:21.289043       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0308 01:33:25.002835       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [15d872b3f05b] <==
	I0308 01:33:43.958734       1 shared_informer.go:318] Caches are synced for crt configmap
	I0308 01:33:43.959079       1 shared_informer.go:318] Caches are synced for node
	I0308 01:33:43.960037       1 range_allocator.go:174] "Sending events to api server"
	I0308 01:33:43.960493       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0308 01:33:43.960628       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0308 01:33:43.960643       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0308 01:33:43.962915       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0308 01:33:43.969883       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0308 01:33:43.971517       1 shared_informer.go:318] Caches are synced for PV protection
	I0308 01:33:43.978132       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0308 01:33:43.986592       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0308 01:33:43.986731       1 shared_informer.go:318] Caches are synced for endpoint
	I0308 01:33:43.992750       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0308 01:33:44.023813       1 shared_informer.go:318] Caches are synced for cronjob
	I0308 01:33:44.023993       1 shared_informer.go:318] Caches are synced for job
	I0308 01:33:44.031467       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0308 01:33:44.031998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="253.301µs"
	I0308 01:33:44.039068       1 shared_informer.go:318] Caches are synced for disruption
	I0308 01:33:44.039305       1 shared_informer.go:318] Caches are synced for deployment
	I0308 01:33:44.045652       1 shared_informer.go:318] Caches are synced for resource quota
	I0308 01:33:44.054472       1 shared_informer.go:318] Caches are synced for stateful set
	I0308 01:33:44.117390       1 shared_informer.go:318] Caches are synced for resource quota
	I0308 01:33:44.510943       1 shared_informer.go:318] Caches are synced for garbage collector
	I0308 01:33:44.511553       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0308 01:33:44.514962       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [ca0870d599f1] <==
	
	
	==> kube-proxy [1650ae73fce3] <==
	
	
	==> kube-proxy [fbd8e6022b6d] <==
	I0308 01:33:33.445970       1 server_others.go:69] "Using iptables proxy"
	I0308 01:33:33.531068       1 node.go:141] Successfully retrieved node IP: 172.20.54.215
	I0308 01:33:33.625780       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 01:33:33.626023       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 01:33:33.630589       1 server_others.go:152] "Using iptables Proxier"
	I0308 01:33:33.631050       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 01:33:33.632240       1 server.go:846] "Version info" version="v1.28.4"
	I0308 01:33:33.632769       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 01:33:33.634252       1 config.go:188] "Starting service config controller"
	I0308 01:33:33.634409       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 01:33:33.635061       1 config.go:97] "Starting endpoint slice config controller"
	I0308 01:33:33.635103       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 01:33:33.635886       1 config.go:315] "Starting node config controller"
	I0308 01:33:33.635919       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 01:33:33.735854       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0308 01:33:33.735954       1 shared_informer.go:318] Caches are synced for service config
	I0308 01:33:33.736436       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0fe3021a276b] <==
	I0308 01:33:01.480806       1 serving.go:348] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [be9eaddf3ddc] <==
	W0308 01:33:30.896009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 01:33:30.896046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 01:33:30.896124       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0308 01:33:30.896163       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 01:33:30.896240       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0308 01:33:30.896257       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0308 01:33:30.896358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 01:33:30.896377       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 01:33:30.896456       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 01:33:30.896495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 01:33:30.896586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0308 01:33:30.896637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0308 01:33:30.896720       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 01:33:30.896756       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0308 01:33:30.901422       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 01:33:30.901467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 01:33:30.901644       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 01:33:30.901733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 01:33:30.901913       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 01:33:30.901997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 01:33:30.903067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 01:33:30.903121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 01:33:30.903138       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0308 01:33:30.903148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0308 01:33:32.785495       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 01:33:30 pause-549000 kubelet[10371]: I0308 01:33:30.925982   10371 topology_manager.go:215] "Topology Admit Handler" podUID="ff75380d-e287-4d97-bd11-67036d795d5a" podNamespace="kube-system" podName="kube-proxy-z8xr2"
	Mar 08 01:33:30 pause-549000 kubelet[10371]: I0308 01:33:30.937069   10371 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 08 01:33:30 pause-549000 kubelet[10371]: W0308 01:33:30.946476   10371 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:pause-549000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-549000' and this object
	Mar 08 01:33:30 pause-549000 kubelet[10371]: E0308 01:33:30.946551   10371 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:pause-549000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-549000' and this object
	Mar 08 01:33:30 pause-549000 kubelet[10371]: W0308 01:33:30.946637   10371 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:pause-549000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-549000' and this object
	Mar 08 01:33:30 pause-549000 kubelet[10371]: E0308 01:33:30.946694   10371 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:pause-549000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-549000' and this object
	Mar 08 01:33:30 pause-549000 kubelet[10371]: W0308 01:33:30.952989   10371 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:pause-549000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-549000' and this object
	Mar 08 01:33:30 pause-549000 kubelet[10371]: E0308 01:33:30.953037   10371 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:pause-549000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-549000' and this object
	Mar 08 01:33:31 pause-549000 kubelet[10371]: I0308 01:33:31.050392   10371 kubelet_node_status.go:108] "Node was previously registered" node="pause-549000"
	Mar 08 01:33:31 pause-549000 kubelet[10371]: I0308 01:33:31.050937   10371 kubelet_node_status.go:73] "Successfully registered node" node="pause-549000"
	Mar 08 01:33:31 pause-549000 kubelet[10371]: I0308 01:33:31.059029   10371 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 08 01:33:31 pause-549000 kubelet[10371]: I0308 01:33:31.060367   10371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff75380d-e287-4d97-bd11-67036d795d5a-lib-modules\") pod \"kube-proxy-z8xr2\" (UID: \"ff75380d-e287-4d97-bd11-67036d795d5a\") " pod="kube-system/kube-proxy-z8xr2"
	Mar 08 01:33:31 pause-549000 kubelet[10371]: I0308 01:33:31.064394   10371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff75380d-e287-4d97-bd11-67036d795d5a-xtables-lock\") pod \"kube-proxy-z8xr2\" (UID: \"ff75380d-e287-4d97-bd11-67036d795d5a\") " pod="kube-system/kube-proxy-z8xr2"
	Mar 08 01:33:31 pause-549000 kubelet[10371]: I0308 01:33:31.066607   10371 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 08 01:33:32 pause-549000 kubelet[10371]: E0308 01:33:32.066301   10371 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Mar 08 01:33:32 pause-549000 kubelet[10371]: E0308 01:33:32.067173   10371 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff75380d-e287-4d97-bd11-67036d795d5a-kube-proxy podName:ff75380d-e287-4d97-bd11-67036d795d5a nodeName:}" failed. No retries permitted until 2024-03-08 01:33:32.567094112 +0000 UTC m=+19.871234118 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ff75380d-e287-4d97-bd11-67036d795d5a-kube-proxy") pod "kube-proxy-z8xr2" (UID: "ff75380d-e287-4d97-bd11-67036d795d5a") : failed to sync configmap cache: timed out waiting for the condition
	Mar 08 01:33:32 pause-549000 kubelet[10371]: E0308 01:33:32.066921   10371 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Mar 08 01:33:32 pause-549000 kubelet[10371]: E0308 01:33:32.067273   10371 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6d1c69d-3975-46dc-b037-11d53142d1f1-config-volume podName:f6d1c69d-3975-46dc-b037-11d53142d1f1 nodeName:}" failed. No retries permitted until 2024-03-08 01:33:32.567244413 +0000 UTC m=+19.871384419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f6d1c69d-3975-46dc-b037-11d53142d1f1-config-volume") pod "coredns-5dd5756b68-2q5bn" (UID: "f6d1c69d-3975-46dc-b037-11d53142d1f1") : failed to sync configmap cache: timed out waiting for the condition
	Mar 08 01:33:32 pause-549000 kubelet[10371]: I0308 01:33:32.728535   10371 scope.go:117] "RemoveContainer" containerID="1650ae73fce37e29f70126d4d3083c38fe8bd2e6c0d46b2fb8a9a3e885b5c364"
	Mar 08 01:33:32 pause-549000 kubelet[10371]: I0308 01:33:32.729581   10371 scope.go:117] "RemoveContainer" containerID="6cbd157ab876a94115260ab401f8c0813ec91011ef37275b333805e994dc04d9"
	Mar 08 01:33:49 pause-549000 kubelet[10371]: I0308 01:33:49.318431   10371 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Mar 08 01:33:49 pause-549000 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Mar 08 01:33:49 pause-549000 systemd[1]: kubelet.service: Deactivated successfully.
	Mar 08 01:33:49 pause-549000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 08 01:33:49 pause-549000 systemd[1]: kubelet.service: Consumed 1.782s CPU time.
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 01:34:23.619689    7952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-549000 -n pause-549000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-549000 -n pause-549000: exit status 2 (14.3712513s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 01:34:46.959112    9036 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-549000" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-549000 -n pause-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-549000 -n pause-549000: exit status 2 (13.5469534s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 01:35:01.310908    2268 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/Unpause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/Unpause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-549000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-549000 logs -n 25: (24.3802418s)
helpers_test.go:252: TestPause/serial/Unpause logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	| pause   | -p pause-549000                                      | pause-549000   | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:33 UTC |
	|         | --alsologtostderr -v=5                               |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo                               | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:33 UTC |
	|         | systemctl cat docker                                 |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p auto-503300 sudo containerd                       | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:34 UTC |
	|         | config dump                                          |                |                   |         |                     |                     |
	| ssh     | -p calico-503300 sudo cat                            | calico-503300  | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:34 UTC |
	|         | /etc/nsswitch.conf                                   |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo cat                           | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:33 UTC | 08 Mar 24 01:34 UTC |
	|         | /etc/docker/daemon.json                              |                |                   |         |                     |                     |
	| ssh     | -p auto-503300 sudo systemctl                        | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC |                     |
	|         | status crio --all --full                             |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| unpause | -p pause-549000                                      | pause-549000   | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC |                     |
	|         | --alsologtostderr -v=5                               |                |                   |         |                     |                     |
	| ssh     | -p calico-503300 sudo cat                            | calico-503300  | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:34 UTC |
	|         | /etc/hosts                                           |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo docker                        | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:34 UTC |
	|         | system info                                          |                |                   |         |                     |                     |
	| ssh     | -p auto-503300 sudo systemctl                        | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:34 UTC |
	|         | cat crio --no-pager                                  |                |                   |         |                     |                     |
	| ssh     | -p calico-503300 sudo cat                            | calico-503300  | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:34 UTC |
	|         | /etc/resolv.conf                                     |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo                               | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:34 UTC |
	|         | systemctl status cri-docker                          |                |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                |                   |         |                     |                     |
	| ssh     | -p auto-503300 sudo find                             | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:34 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |                   |         |                     |                     |
	| ssh     | -p calico-503300 sudo crictl                         | calico-503300  | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:34 UTC |
	|         | pods                                                 |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo                               | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:34 UTC |
	|         | systemctl cat cri-docker                             |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p auto-503300 sudo crio                             | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:34 UTC |
	|         | config                                               |                |                   |         |                     |                     |
	| ssh     | -p calico-503300 sudo crictl                         | calico-503300  | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:34 UTC |
	|         | ps --all                                             |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo cat                           | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:34 UTC |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |                   |         |                     |                     |
	| delete  | -p auto-503300                                       | auto-503300    | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC |                     |
	| ssh     | -p calico-503300 sudo find                           | calico-503300  | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:35 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |                   |         |                     |                     |
	| ssh     | -p kindnet-503300 sudo cat                           | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:34 UTC | 08 Mar 24 01:35 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |                   |         |                     |                     |
	| ssh     | -p calico-503300 sudo ip a s                         | calico-503300  | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:35 UTC | 08 Mar 24 01:35 UTC |
	| ssh     | -p kindnet-503300 sudo                               | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:35 UTC | 08 Mar 24 01:35 UTC |
	|         | cri-dockerd --version                                |                |                   |         |                     |                     |
	| ssh     | -p calico-503300 sudo ip r s                         | calico-503300  | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:35 UTC |                     |
	| ssh     | -p kindnet-503300 sudo                               | kindnet-503300 | minikube7\jenkins | v1.32.0 | 08 Mar 24 01:35 UTC |                     |
	|         | systemctl status containerd                          |                |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                |                   |         |                     |                     |
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/08 01:25:41
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0308 01:25:41.648436    3724 out.go:291] Setting OutFile to fd 1892 ...
	I0308 01:25:41.649266    3724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 01:25:41.649365    3724 out.go:304] Setting ErrFile to fd 1800...
	I0308 01:25:41.649365    3724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 01:25:41.673513    3724 out.go:298] Setting JSON to false
	I0308 01:25:41.676785    3724 start.go:129] hostinfo: {"hostname":"minikube7","uptime":20095,"bootTime":1709841045,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0308 01:25:41.676785    3724 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0308 01:25:41.682873    3724 out.go:177] * [pause-549000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0308 01:25:41.685437    3724 notify.go:220] Checking for updates...
	I0308 01:25:41.687564    3724 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 01:25:41.691498    3724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0308 01:25:41.694320    3724 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0308 01:25:41.696224    3724 out.go:177]   - MINIKUBE_LOCATION=16214
	I0308 01:25:41.699946    3724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0308 01:25:38.570183    3532 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:25:38.570183    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:39.580020    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:25:41.647645    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:25:41.647645    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:41.647887    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:25:41.703851    3724 config.go:182] Loaded profile config "pause-549000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:25:41.704909    3724 driver.go:392] Setting default libvirt URI to qemu:///system
	I0308 01:25:46.757943    3724 out.go:177] * Using the hyperv driver based on existing profile
	I0308 01:25:46.761217    3724 start.go:297] selected driver: hyperv
	I0308 01:25:46.761217    3724 start.go:901] validating driver "hyperv" against &{Name:pause-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.54.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 01:25:46.761785    3724 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0308 01:25:46.810215    3724 cni.go:84] Creating CNI manager for ""
	I0308 01:25:46.810318    3724 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0308 01:25:46.810506    3724 start.go:340] cluster config:
	{Name:pause-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.54.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 01:25:46.810506    3724 iso.go:125] acquiring lock: {Name:mk41e0d38e058de906ab8df117c3158b3dc0e5b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0308 01:25:46.814959    3724 out.go:177] * Starting "pause-549000" primary control-plane node in "pause-549000" cluster
	I0308 01:25:44.094507    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:25:44.094507    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:44.107203    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:25:46.106633    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:25:46.113575    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:46.113720    3532 machine.go:94] provisionDockerMachine start ...
	I0308 01:25:46.113865    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:25:46.818394    3724 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 01:25:46.818634    3724 preload.go:147] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0308 01:25:46.818634    3724 cache.go:56] Caching tarball of preloaded images
	I0308 01:25:46.818973    3724 preload.go:173] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0308 01:25:46.819135    3724 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0308 01:25:46.819403    3724 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\config.json ...
	I0308 01:25:46.821895    3724 start.go:360] acquireMachinesLock for pause-549000: {Name:mk814f158b6187cc9297257c36fdbe0d2871c950 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0308 01:25:48.073857    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:25:48.081473    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:48.081473    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:25:50.304089    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:25:50.304089    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:50.309154    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:25:50.309799    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:25:50.309799    3532 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 01:25:50.429161    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 01:25:50.429338    3532 buildroot.go:166] provisioning hostname "auto-503300"
	I0308 01:25:50.429416    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:25:52.317819    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:25:52.319149    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:52.319149    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:25:54.570730    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:25:54.570819    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:54.575957    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:25:54.576851    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:25:54.576917    3532 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-503300 && echo "auto-503300" | sudo tee /etc/hostname
	I0308 01:25:54.722371    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-503300
	
	I0308 01:25:54.722477    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:25:56.611604    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:25:56.611604    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:56.611604    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:25:58.884745    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:25:58.884745    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:25:58.895116    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:25:58.895116    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:25:58.895116    3532 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-503300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-503300/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-503300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 01:25:59.033838    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 01:25:59.033903    3532 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 01:25:59.033955    3532 buildroot.go:174] setting up certificates
	I0308 01:25:59.034022    3532 provision.go:84] configureAuth start
	I0308 01:25:59.034071    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:00.917365    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:00.929050    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:00.929050    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:03.161786    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:03.172673    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:03.172779    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:05.065907    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:05.065907    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:05.066160    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:07.281486    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:07.281486    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:07.291884    3532 provision.go:143] copyHostCerts
	I0308 01:26:07.292292    3532 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 01:26:07.292549    3532 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 01:26:07.293058    3532 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 01:26:07.294310    3532 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 01:26:07.294394    3532 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 01:26:07.294993    3532 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 01:26:07.296479    3532 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 01:26:07.296479    3532 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 01:26:07.296704    3532 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 01:26:07.297691    3532 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.auto-503300 san=[127.0.0.1 172.20.53.54 auto-503300 localhost minikube]
	I0308 01:26:07.436045    3532 provision.go:177] copyRemoteCerts
	I0308 01:26:07.446321    3532 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 01:26:07.446321    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:09.325949    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:09.326122    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:09.326204    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:11.589156    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:11.599294    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:11.599733    3532 sshutil.go:53] new ssh client: &{IP:172.20.53.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-503300\id_rsa Username:docker}
	I0308 01:26:11.701896    3532 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2555353s)
	I0308 01:26:11.702135    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 01:26:11.744150    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1204 bytes)
	I0308 01:26:11.784961    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 01:26:11.834318    3532 provision.go:87] duration metric: took 12.8001083s to configureAuth
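(Illustrative aside, not part of the captured test output: the configureAuth step above generates a CA-signed server certificate whose SANs are listed in the log. A minimal Go sketch of that kind of certificate generation follows; the file paths, 2048-bit key size and validity period are assumptions, and error handling is elided because the sketch assumes the CA files exist.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load the CA certificate and its PKCS#1 RSA key (hypothetical paths).
        caPEM, _ := os.ReadFile("ca.pem")
        caKeyPEM, _ := os.ReadFile("ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

        // Server key plus a template carrying the SANs seen in the log line above.
        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.auto-503300"}},
            DNSNames:     []string{"auto-503300", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.53.54")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }

        // Sign with the CA and write the PEM-encoded server cert and key.
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        _ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
        _ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
    }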
	I0308 01:26:11.834385    3532 buildroot.go:189] setting minikube options for container-runtime
	I0308 01:26:11.834385    3532 config.go:182] Loaded profile config "auto-503300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:26:11.834385    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:13.695320    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:13.695471    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:13.695542    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:15.957966    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:15.972484    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:15.978411    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:26:15.979137    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:26:15.979137    3532 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 01:26:16.100759    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 01:26:16.100825    3532 buildroot.go:70] root file system type: tmpfs
	I0308 01:26:16.100825    3532 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 01:26:16.100825    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:17.921771    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:17.921771    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:17.934024    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:20.149980    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:20.159608    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:20.164834    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:26:20.164834    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:26:20.165419    3532 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 01:26:20.305681    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 01:26:20.305681    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:22.179414    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:22.179414    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:22.188959    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:24.435390    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:24.435680    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:24.440190    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:26:24.440851    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:26:24.440851    3532 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 01:26:25.579859    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0308 01:26:25.579926    3532 machine.go:97] duration metric: took 39.4658385s to provisionDockerMachine
	I0308 01:26:25.579926    3532 client.go:171] duration metric: took 1m48.1504169s to LocalClient.Create
	I0308 01:26:25.579984    3532 start.go:167] duration metric: took 1m48.1504748s to libmachine.API.Create "auto-503300"
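(Illustrative aside, not from the log: the docker.service update above writes the new unit to docker.service.new and only swaps it in and restarts Docker if it differs from the installed unit. A minimal Go sketch of running that idempotent-update command over SSH is shown below; the host, user and key path are assumptions, and golang.org/x/crypto/ssh is the only external dependency.)

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\path\to\id_rsa`) // hypothetical key path
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }

        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "172.20.53.54:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        // Only replace and restart the unit when the new file actually differs.
        cmd := "sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new" +
            " || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service;" +
            " sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }"
        out, _ := sess.CombinedOutput(cmd)
        fmt.Println(string(out))
    }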
	I0308 01:26:25.579984    3532 start.go:293] postStartSetup for "auto-503300" (driver="hyperv")
	I0308 01:26:25.580033    3532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 01:26:25.591025    3532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 01:26:25.591025    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:27.500771    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:27.500771    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:27.511285    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:29.792928    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:29.792928    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:29.803254    3532 sshutil.go:53] new ssh client: &{IP:172.20.53.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-503300\id_rsa Username:docker}
	I0308 01:26:29.901407    3532 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3102672s)
	I0308 01:26:29.912962    3532 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 01:26:29.919625    3532 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 01:26:29.919724    3532 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 01:26:29.920215    3532 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 01:26:29.921135    3532 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 01:26:29.929690    3532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 01:26:29.950481    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 01:26:29.991809    3532 start.go:296] duration metric: took 4.4117844s for postStartSetup
	I0308 01:26:29.994691    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:31.854356    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:31.854356    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:31.864683    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:34.101323    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:34.101323    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:34.101323    3532 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\config.json ...
	I0308 01:26:34.105074    3532 start.go:128] duration metric: took 1m56.6801618s to createHost
	I0308 01:26:34.105074    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:35.997146    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:35.997146    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:35.998272    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:42.551919    4296 start.go:364] duration metric: took 2m39.3659037s to acquireMachinesLock for "kindnet-503300"
	I0308 01:26:42.551919    4296 start.go:93] Provisioning new machine with config: &{Name:kindnet-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 01:26:42.552603    4296 start.go:125] createHost starting for "" (driver="hyperv")
	I0308 01:26:38.220751    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:38.220751    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:38.230426    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:26:38.230517    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:26:38.230517    3532 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 01:26:38.348059    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709861198.352439443
	
	I0308 01:26:38.348150    3532 fix.go:216] guest clock: 1709861198.352439443
	I0308 01:26:38.348150    3532 fix.go:229] Guest: 2024-03-08 01:26:38.352439443 +0000 UTC Remote: 2024-03-08 01:26:34.1050742 +0000 UTC m=+291.229849901 (delta=4.247365243s)
	I0308 01:26:38.348272    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:40.192711    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:40.192711    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:40.192711    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:42.405108    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:42.415591    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:42.420733    3532 main.go:141] libmachine: Using SSH client type: native
	I0308 01:26:42.420733    3532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.53.54 22 <nil> <nil>}
	I0308 01:26:42.420733    3532 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709861198
	I0308 01:26:42.551023    3532 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 01:26:38 UTC 2024
	
	I0308 01:26:42.551570    3532 fix.go:236] clock set: Fri Mar  8 01:26:38 UTC 2024
	 (err=<nil>)
	I0308 01:26:42.551570    3532 start.go:83] releasing machines lock for "auto-503300", held for 2m5.1275299s
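(Illustrative aside, not from the log: the clock-fix step above reads the guest clock with `date +%s.%N`, compares it against the host's notion of "now", and then resets the guest with `sudo date -s @<seconds>`. A minimal Go sketch of that delta computation, using the exact timestamps from the log, is shown below.)

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest clock as reported by `date +%s.%N` in the log.
        guest := time.Unix(1709861198, 352439443).UTC()
        // Host-side "Remote" timestamp from the same log line.
        remote := time.Date(2024, 3, 8, 1, 26, 34, 105074200, time.UTC)

        delta := guest.Sub(remote)
        fmt.Printf("guest: %s remote: %s delta: %s\n", guest, remote, delta) // delta: 4.247365243s

        // Command used to set the guest clock, matching `sudo date -s @1709861198`.
        fmt.Printf("sudo date -s @%d\n", guest.Unix())
    }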
	I0308 01:26:42.551889    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:42.556793    4296 out.go:204] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0308 01:26:42.557469    4296 start.go:159] libmachine.API.Create for "kindnet-503300" (driver="hyperv")
	I0308 01:26:42.557469    4296 client.go:168] LocalClient.Create starting
	I0308 01:26:42.558595    4296 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0308 01:26:42.558864    4296 main.go:141] libmachine: Decoding PEM data...
	I0308 01:26:42.558864    4296 main.go:141] libmachine: Parsing certificate...
	I0308 01:26:42.559144    4296 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0308 01:26:42.559351    4296 main.go:141] libmachine: Decoding PEM data...
	I0308 01:26:42.559455    4296 main.go:141] libmachine: Parsing certificate...
	I0308 01:26:42.559543    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0308 01:26:44.344188    4296 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0308 01:26:44.344188    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:44.344288    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0308 01:26:45.968323    4296 main.go:141] libmachine: [stdout =====>] : False
	
	I0308 01:26:45.968323    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:45.977620    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0308 01:26:44.540626    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:44.551551    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:44.551551    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:46.941916    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:46.941916    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:46.959646    3532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 01:26:46.959758    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:46.971478    3532 ssh_runner.go:195] Run: cat /version.json
	I0308 01:26:46.971478    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:26:47.425959    4296 main.go:141] libmachine: [stdout =====>] : True
	
	I0308 01:26:47.425959    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:47.425959    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0308 01:26:50.955311    4296 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0308 01:26:50.955311    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:50.969252    4296 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 01:26:49.101410    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:49.101410    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:49.101675    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:49.104886    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:26:49.105096    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:49.105096    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:26:51.591560    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:51.597617    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:51.598236    3532 sshutil.go:53] new ssh client: &{IP:172.20.53.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-503300\id_rsa Username:docker}
	I0308 01:26:51.643499    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:26:51.643681    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:51.643935    3532 sshutil.go:53] new ssh client: &{IP:172.20.53.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-503300\id_rsa Username:docker}
	I0308 01:26:51.789656    3532 ssh_runner.go:235] Completed: cat /version.json: (4.8181335s)
	I0308 01:26:51.789656    3532 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.8298536s)
	I0308 01:26:51.809707    3532 ssh_runner.go:195] Run: systemctl --version
	I0308 01:26:51.831674    3532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 01:26:51.841472    3532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 01:26:51.854794    3532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 01:26:51.882894    3532 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 01:26:51.882965    3532 start.go:494] detecting cgroup driver to use...
	I0308 01:26:51.883337    3532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:26:51.930545    3532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 01:26:51.960798    3532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 01:26:51.978215    3532 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 01:26:51.990296    3532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 01:26:52.024128    3532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:26:52.055152    3532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 01:26:52.083923    3532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:26:52.120146    3532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 01:26:52.152960    3532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 01:26:52.186885    3532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 01:26:52.215923    3532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 01:26:52.243740    3532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:26:52.442704    3532 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0308 01:26:52.469884    3532 start.go:494] detecting cgroup driver to use...
	I0308 01:26:52.483638    3532 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 01:26:52.516887    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:26:52.551980    3532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 01:26:52.597700    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:26:52.631139    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 01:26:52.664782    3532 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0308 01:26:52.854141    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 01:26:52.880631    3532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:26:52.926338    3532 ssh_runner.go:195] Run: which cri-dockerd
	I0308 01:26:52.943820    3532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 01:26:52.948063    3532 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 01:26:53.001144    3532 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 01:26:51.389638    4296 main.go:141] libmachine: Creating SSH key...
	I0308 01:26:51.707284    4296 main.go:141] libmachine: Creating VM...
	I0308 01:26:51.707284    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0308 01:26:54.504111    4296 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0308 01:26:54.504169    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:54.504274    4296 main.go:141] libmachine: Using switch "Default Switch"
	I0308 01:26:54.504274    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0308 01:26:56.163411    4296 main.go:141] libmachine: [stdout =====>] : True
	
	I0308 01:26:56.163411    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:26:56.168332    4296 main.go:141] libmachine: Creating VHD
	I0308 01:26:56.168332    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0308 01:26:53.186556    3532 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 01:26:53.356084    3532 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 01:26:53.356287    3532 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0308 01:26:53.396712    3532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:26:53.579319    3532 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 01:26:55.183946    3532 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6046124s)
	I0308 01:26:55.198920    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 01:26:55.239542    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:26:55.273924    3532 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 01:26:55.458065    3532 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 01:26:55.637833    3532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:26:55.831707    3532 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 01:26:55.870223    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:26:55.903532    3532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:26:56.082831    3532 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 01:26:56.188829    3532 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 01:26:56.200642    3532 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 01:26:56.210168    3532 start.go:562] Will wait 60s for crictl version
	I0308 01:26:56.221773    3532 ssh_runner.go:195] Run: which crictl
	I0308 01:26:56.238556    3532 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 01:26:56.307942    3532 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 01:26:56.320124    3532 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:26:56.363490    3532 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:26:56.393244    3532 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 01:26:56.393329    3532 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 01:26:56.397931    3532 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 01:26:56.397931    3532 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 01:26:56.397931    3532 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 01:26:56.397931    3532 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 01:26:56.400546    3532 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 01:26:56.400546    3532 ip.go:210] interface addr: 172.20.48.1/20
	I0308 01:26:56.405190    3532 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 01:26:56.415198    3532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 01:26:56.434162    3532 kubeadm.go:877] updating cluster {Name:auto-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.53.54 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 01:26:56.434467    3532 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 01:26:56.442940    3532 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:26:56.467020    3532 docker.go:685] Got preloaded images: 
	I0308 01:26:56.467083    3532 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0308 01:26:56.479517    3532 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0308 01:26:56.513373    3532 ssh_runner.go:195] Run: which lz4
	I0308 01:26:56.530824    3532 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 01:26:56.539522    3532 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 01:26:56.539738    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0308 01:27:00.311883    4296 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : A9FD6913-AAF3-4A6E-AF4C-D0C0425612C6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0308 01:27:00.311883    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:00.311883    4296 main.go:141] libmachine: Writing magic tar header
	I0308 01:27:00.312112    4296 main.go:141] libmachine: Writing SSH key tar header
	I0308 01:27:00.321396    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0308 01:26:59.368764    3532 docker.go:649] duration metric: took 2.8482129s to copy over tarball
	I0308 01:26:59.380291    3532 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 01:27:03.427555    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:03.427804    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:03.427905    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\disk.vhd' -SizeBytes 20000MB
	I0308 01:27:05.857014    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:05.857014    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:05.868789    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM kindnet-503300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0308 01:27:08.384095    3532 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.0037203s)
	I0308 01:27:08.384202    3532 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 01:27:08.449832    3532 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0308 01:27:08.467051    3532 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0308 01:27:08.507707    3532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:27:08.681148    3532 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 01:27:12.625747    3532 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.944562s)
	I0308 01:27:12.635611    3532 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:27:12.661404    3532 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0308 01:27:12.661404    3532 cache_images.go:84] Images are preloaded, skipping loading
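(Illustrative aside, not from the log: the check above compares the `docker images` listing against the set of images expected for the requested Kubernetes version before deciding that loading can be skipped. A minimal Go sketch of that comparison follows; in the real flow the command runs on the guest over SSH, while this sketch simply shells out locally for brevity.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        expected := []string{
            "registry.k8s.io/kube-apiserver:v1.28.4",
            "registry.k8s.io/kube-proxy:v1.28.4",
            "registry.k8s.io/kube-controller-manager:v1.28.4",
            "registry.k8s.io/kube-scheduler:v1.28.4",
            "registry.k8s.io/etcd:3.5.9-0",
            "registry.k8s.io/coredns/coredns:v1.10.1",
            "registry.k8s.io/pause:3.9",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }

        // Same listing format as the command in the log above.
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := make(map[string]bool)
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }

        missing := 0
        for _, img := range expected {
            if !have[img] {
                fmt.Println("missing:", img)
                missing++
            }
        }
        if missing == 0 {
            fmt.Println("images are preloaded, skipping loading")
        }
    }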
	I0308 01:27:12.661404    3532 kubeadm.go:928] updating node { 172.20.53.54 8443 v1.28.4 docker true true} ...
	I0308 01:27:12.662103    3532 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-503300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.53.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:auto-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 01:27:12.673296    3532 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0308 01:27:12.708893    3532 cni.go:84] Creating CNI manager for ""
	I0308 01:27:12.708893    3532 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0308 01:27:12.708893    3532 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 01:27:12.708893    3532 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.53.54 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-503300 NodeName:auto-503300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.53.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.53.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 01:27:12.708893    3532 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.53.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "auto-503300"
	  kubeletExtraArgs:
	    node-ip: 172.20.53.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.53.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 01:27:12.721494    3532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 01:27:12.738504    3532 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 01:27:12.749986    3532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 01:27:12.767775    3532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0308 01:27:12.805442    3532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 01:27:12.839497    3532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0308 01:27:12.881599    3532 ssh_runner.go:195] Run: grep 172.20.53.54	control-plane.minikube.internal$ /etc/hosts
	I0308 01:27:12.887340    3532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.53.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 01:27:12.920951    3532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:27:12.378178    4296 main.go:141] libmachine: [stdout =====>] : 
	Name           State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----           ----- ----------- ----------------- ------   ------             -------
	kindnet-503300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0308 01:27:12.378178    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:12.388950    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName kindnet-503300 -DynamicMemoryEnabled $false
	I0308 01:27:14.476607    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:14.480074    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:14.480074    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor kindnet-503300 -Count 2
	I0308 01:27:13.098573    3532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:27:13.125907    3532 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300 for IP: 172.20.53.54
	I0308 01:27:13.125940    3532 certs.go:194] generating shared ca certs ...
	I0308 01:27:13.126013    3532 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:13.126857    3532 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 01:27:13.126857    3532 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 01:27:13.127469    3532 certs.go:256] generating profile certs ...
	I0308 01:27:13.127539    3532 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\client.key
	I0308 01:27:13.128243    3532 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\client.crt with IP's: []
	I0308 01:27:13.222869    3532 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\client.crt ...
	I0308 01:27:13.222869    3532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\client.crt: {Name:mkeb0f2a5bb3f618f1dbc02834bfc5e591282511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:13.228962    3532 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\client.key ...
	I0308 01:27:13.228962    3532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\client.key: {Name:mk5338162e9bf0bf00676d94964732c038c1a4b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:13.230050    3532 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.key.ca257204
	I0308 01:27:13.231047    3532 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.crt.ca257204 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.53.54]
	I0308 01:27:13.481754    3532 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.crt.ca257204 ...
	I0308 01:27:13.481754    3532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.crt.ca257204: {Name:mkdd377018daa63db316fc4bfd5fccd0e26c6cf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:13.484387    3532 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.key.ca257204 ...
	I0308 01:27:13.484387    3532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.key.ca257204: {Name:mk751f3cb046c28b55774fe3f2e77a7914e57f04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:13.485610    3532 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.crt.ca257204 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.crt
	I0308 01:27:13.492333    3532 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.key.ca257204 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.key
	I0308 01:27:13.497667    3532 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.key
	I0308 01:27:13.497667    3532 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.crt with IP's: []
	I0308 01:27:13.885026    3532 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.crt ...
	I0308 01:27:13.885026    3532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.crt: {Name:mk07683b3a954eb0e4f56863772cd562f8cd650a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:13.887752    3532 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.key ...
	I0308 01:27:13.887752    3532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.key: {Name:mkf374e7b1feb909e230f0b0cb195580f35df7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:13.895576    3532 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 01:27:13.899274    3532 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 01:27:13.899393    3532 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 01:27:13.899516    3532 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 01:27:13.899516    3532 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 01:27:13.899516    3532 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 01:27:13.900294    3532 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 01:27:13.900874    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 01:27:13.938763    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 01:27:13.975550    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 01:27:14.016887    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 01:27:14.063454    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0308 01:27:14.109515    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 01:27:14.152202    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 01:27:14.195549    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-503300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 01:27:14.236790    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 01:27:14.278499    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 01:27:14.322975    3532 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 01:27:14.364522    3532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 01:27:14.408589    3532 ssh_runner.go:195] Run: openssl version
	I0308 01:27:14.427078    3532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 01:27:14.455300    3532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:27:14.462002    3532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:27:14.473713    3532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:27:14.493785    3532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 01:27:14.525350    3532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 01:27:14.553792    3532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 01:27:14.561176    3532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 01:27:14.575126    3532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 01:27:14.596353    3532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0308 01:27:14.632677    3532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 01:27:14.669700    3532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 01:27:14.676507    3532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 01:27:14.687616    3532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 01:27:14.708929    3532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
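The three /etc/ssl/certs/<hash>.0 links created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash naming convention: the link name is the certificate's subject hash plus a ".0" suffix, so TLS clients can locate a CA in the directory by hash. A minimal standalone sketch of the same two steps for one certificate, reusing the paths from this run purely for illustration:

    # the CA copy already sits in /usr/share/ca-certificates; link it into /etc/ssl/certs
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    # compute the subject hash (prints e.g. b5213941) and create the <hash>.0 lookup link
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"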
	I0308 01:27:14.739180    3532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 01:27:14.746304    3532 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 01:27:14.746304    3532 kubeadm.go:391] StartCluster: {Name:auto-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.53.54 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 01:27:14.757307    3532 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 01:27:14.790344    3532 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 01:27:14.817904    3532 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 01:27:14.845897    3532 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 01:27:14.863181    3532 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 01:27:14.863301    3532 kubeadm.go:156] found existing configuration files:
	
	I0308 01:27:14.878686    3532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 01:27:14.894806    3532 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 01:27:14.908495    3532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 01:27:14.937231    3532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 01:27:14.954020    3532 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 01:27:14.965712    3532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 01:27:14.993719    3532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 01:27:15.009400    3532 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 01:27:15.021116    3532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 01:27:15.051778    3532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 01:27:15.072742    3532 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 01:27:15.084149    3532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 01:27:15.099405    3532 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 01:27:15.350076    3532 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 01:27:16.484576    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:16.484576    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:16.493721    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName kindnet-503300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\boot2docker.iso'
	I0308 01:27:18.830681    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:18.837884    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:18.837884    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName kindnet-503300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\disk.vhd'
	I0308 01:27:21.243714    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:21.253736    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:21.253736    4296 main.go:141] libmachine: Starting VM...
	I0308 01:27:21.253736    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kindnet-503300
	I0308 01:27:24.203591    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:24.204067    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:24.204067    4296 main.go:141] libmachine: Waiting for host to start...
	I0308 01:27:24.204171    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:27:30.422498    3532 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 01:27:30.422498    3532 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 01:27:30.422498    3532 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 01:27:30.423024    3532 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 01:27:30.423330    3532 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 01:27:30.423559    3532 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 01:27:30.427730    3532 out.go:204]   - Generating certificates and keys ...
	I0308 01:27:30.428392    3532 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 01:27:30.428572    3532 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 01:27:30.428614    3532 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 01:27:30.428614    3532 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0308 01:27:30.428614    3532 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0308 01:27:30.428614    3532 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0308 01:27:30.428614    3532 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0308 01:27:30.429791    3532 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [auto-503300 localhost] and IPs [172.20.53.54 127.0.0.1 ::1]
	I0308 01:27:30.429791    3532 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0308 01:27:30.429791    3532 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [auto-503300 localhost] and IPs [172.20.53.54 127.0.0.1 ::1]
	I0308 01:27:30.430440    3532 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 01:27:30.430467    3532 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 01:27:30.430467    3532 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0308 01:27:30.430467    3532 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 01:27:30.431095    3532 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 01:27:30.431307    3532 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 01:27:30.431612    3532 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 01:27:30.431731    3532 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 01:27:30.431731    3532 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 01:27:30.432614    3532 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 01:27:30.435537    3532 out.go:204]   - Booting up control plane ...
	I0308 01:27:30.435537    3532 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 01:27:30.435537    3532 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 01:27:30.436084    3532 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 01:27:30.436241    3532 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 01:27:30.436241    3532 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 01:27:30.436241    3532 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 01:27:30.436910    3532 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 01:27:30.436910    3532 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.004135 seconds
	I0308 01:27:30.437699    3532 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 01:27:30.437992    3532 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 01:27:30.437992    3532 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 01:27:30.438654    3532 kubeadm.go:309] [mark-control-plane] Marking the node auto-503300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 01:27:30.438654    3532 kubeadm.go:309] [bootstrap-token] Using token: jux1em.0cf7kc2zweaoxk1n
	I0308 01:27:30.441564    3532 out.go:204]   - Configuring RBAC rules ...
	I0308 01:27:30.442370    3532 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 01:27:30.442370    3532 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 01:27:30.442913    3532 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 01:27:30.443140    3532 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 01:27:30.443140    3532 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 01:27:30.443140    3532 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 01:27:30.444221    3532 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 01:27:30.444418    3532 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 01:27:30.444680    3532 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 01:27:30.444680    3532 kubeadm.go:309] 
	I0308 01:27:30.444959    3532 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 01:27:30.444959    3532 kubeadm.go:309] 
	I0308 01:27:30.444959    3532 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 01:27:30.444959    3532 kubeadm.go:309] 
	I0308 01:27:30.444959    3532 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 01:27:30.444959    3532 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 01:27:30.444959    3532 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 01:27:30.444959    3532 kubeadm.go:309] 
	I0308 01:27:30.444959    3532 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 01:27:30.444959    3532 kubeadm.go:309] 
	I0308 01:27:30.446045    3532 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 01:27:30.446165    3532 kubeadm.go:309] 
	I0308 01:27:30.446335    3532 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 01:27:30.446611    3532 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 01:27:30.446871    3532 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 01:27:30.446871    3532 kubeadm.go:309] 
	I0308 01:27:30.446871    3532 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 01:27:30.446871    3532 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 01:27:30.446871    3532 kubeadm.go:309] 
	I0308 01:27:30.446871    3532 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jux1em.0cf7kc2zweaoxk1n \
	I0308 01:27:30.446871    3532 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 \
	I0308 01:27:30.448049    3532 kubeadm.go:309] 	--control-plane 
	I0308 01:27:30.448049    3532 kubeadm.go:309] 
	I0308 01:27:30.448429    3532 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 01:27:30.448492    3532 kubeadm.go:309] 
	I0308 01:27:30.448874    3532 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jux1em.0cf7kc2zweaoxk1n \
	I0308 01:27:30.449441    3532 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
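For reference, the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA public key. It can be recomputed on the control plane with the standard kubeadm-documented recipe; this is a sketch, using the certificateDir reported earlier in this run (/var/lib/minikube/certs) rather than the kubeadm default /etc/kubernetes/pki:

    # prints the hex digest; prefix it with "sha256:" to compare with the value above
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'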
	I0308 01:27:30.449552    3532 cni.go:84] Creating CNI manager for ""
	I0308 01:27:30.449616    3532 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0308 01:27:30.454403    3532 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 01:27:26.351354    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:26.361890    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:26.362007    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:28.778270    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:28.778354    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:29.780011    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:27:30.471227    3532 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 01:27:30.496842    3532 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 01:27:30.545705    3532 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 01:27:30.559124    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:30.561705    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-503300 minikube.k8s.io/updated_at=2024_03_08T01_27_30_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=auto-503300 minikube.k8s.io/primary=true
	I0308 01:27:30.592407    3532 ops.go:34] apiserver oom_adj: -16
	I0308 01:27:30.964414    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:31.479176    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:31.969034    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:32.472158    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:32.977294    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:31.885018    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:31.885018    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:31.885018    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:34.261075    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:34.261075    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:35.270312    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:27:33.475854    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:33.972280    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:34.465540    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:34.977719    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:35.475118    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:35.973158    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:36.468510    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:36.978878    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:37.475791    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:37.970083    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:37.336225    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:37.336225    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:37.336810    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:39.721931    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:39.721931    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:40.727603    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:27:38.474575    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:38.964719    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:39.477020    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:39.971315    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:40.464227    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:40.973582    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:41.471265    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:41.973781    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:42.479283    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:42.975951    3532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:27:43.111543    3532 kubeadm.go:1106] duration metric: took 12.5657197s to wait for elevateKubeSystemPrivileges
	W0308 01:27:43.111543    3532 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 01:27:43.111543    3532 kubeadm.go:393] duration metric: took 28.3649725s to StartCluster
	I0308 01:27:43.111543    3532 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:43.111543    3532 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 01:27:43.114116    3532 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:27:43.115286    3532 start.go:234] Will wait 15m0s for node &{Name: IP:172.20.53.54 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 01:27:43.115286    3532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0308 01:27:43.121372    3532 out.go:177] * Verifying Kubernetes components...
	I0308 01:27:43.115914    3532 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 01:27:43.121372    3532 addons.go:69] Setting storage-provisioner=true in profile "auto-503300"
	I0308 01:27:43.117750    3532 config.go:182] Loaded profile config "auto-503300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:27:43.125082    3532 addons.go:234] Setting addon storage-provisioner=true in "auto-503300"
	I0308 01:27:43.121372    3532 addons.go:69] Setting default-storageclass=true in profile "auto-503300"
	I0308 01:27:43.125082    3532 host.go:66] Checking if "auto-503300" exists ...
	I0308 01:27:43.125082    3532 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-503300"
	I0308 01:27:43.126139    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:27:43.127852    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:27:43.141961    3532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:27:43.513829    3532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0308 01:27:43.525499    3532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:27:45.518019    3532 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.9924561s)
	I0308 01:27:45.518133    3532 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.0042429s)
	I0308 01:27:45.518279    3532 start.go:948] {"host.minikube.internal": 172.20.48.1} host record injected into CoreDNS's ConfigMap
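The pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway address. Reconstructed from the sed expressions in the command above (not dumped from the live cluster), the affected fragment of the Corefile ends up roughly like this:

        log
        errors
        ...
        hosts {
           172.20.48.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf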
	I0308 01:27:45.523362    3532 node_ready.go:35] waiting up to 15m0s for node "auto-503300" to be "Ready" ...
	I0308 01:27:45.564956    3532 node_ready.go:49] node "auto-503300" has status "Ready":"True"
	I0308 01:27:45.565051    3532 node_ready.go:38] duration metric: took 41.6886ms for node "auto-503300" to be "Ready" ...
	I0308 01:27:45.565126    3532 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:27:45.594681    3532 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace to be "Ready" ...
	I0308 01:27:45.622719    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:45.628028    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:45.631172    3532 addons.go:234] Setting addon default-storageclass=true in "auto-503300"
	I0308 01:27:45.631802    3532 host.go:66] Checking if "auto-503300" exists ...
	I0308 01:27:45.632843    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:27:45.732738    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:45.737652    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:45.740578    3532 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 01:27:42.853956    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:42.853956    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:42.853956    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:45.757539    4296 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:27:45.761419    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:45.742765    3532 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 01:27:45.742765    3532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 01:27:45.743471    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:27:46.031742    3532 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-503300" context rescaled to 1 replicas
	I0308 01:27:47.618270    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:27:47.846952    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:47.862309    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:47.862397    3532 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 01:27:47.862397    3532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 01:27:47.862397    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-503300 ).state
	I0308 01:27:48.007901    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:48.007901    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:48.012351    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:46.768629    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:27:49.112246    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:49.112728    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:49.112815    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:49.622746    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:27:50.083298    3532 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:50.088206    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:50.088287    3532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:50.673550    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:27:50.673550    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:50.673995    3532 sshutil.go:53] new ssh client: &{IP:172.20.53.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-503300\id_rsa Username:docker}
	I0308 01:27:50.818232    3532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 01:27:52.126626    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:27:52.609267    3532 main.go:141] libmachine: [stdout =====>] : 172.20.53.54
	
	I0308 01:27:52.609267    3532 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:52.623979    3532 sshutil.go:53] new ssh client: &{IP:172.20.53.54 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-503300\id_rsa Username:docker}
	I0308 01:27:52.768959    3532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 01:27:53.004052    3532 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0308 01:27:53.006594    3532 addons.go:505] duration metric: took 9.8905869s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0308 01:27:51.658272    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:27:51.658336    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:51.658399    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:27:53.693043    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:53.693043    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:53.693043    4296 machine.go:94] provisionDockerMachine start ...
	I0308 01:27:53.693043    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:27:55.640974    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:27:55.651454    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:55.651454    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:54.614297    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:27:57.113671    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:27:58.007814    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:27:58.007814    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:27:58.012456    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:27:58.013303    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:27:58.013303    4296 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 01:27:58.138761    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 01:27:58.138877    4296 buildroot.go:166] provisioning hostname "kindnet-503300"
	I0308 01:27:58.138877    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:00.053473    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:00.053473    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:00.064119    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:27:59.617462    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:02.112220    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:02.394797    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:02.394797    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:02.399516    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:28:02.400087    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:28:02.400151    4296 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-503300 && echo "kindnet-503300" | sudo tee /etc/hostname
	I0308 01:28:02.554465    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-503300
	
	I0308 01:28:02.554465    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:04.492241    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:04.492241    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:04.492241    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:04.613956    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:06.623558    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:06.805763    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:06.805763    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:06.811744    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:28:06.811861    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:28:06.811861    4296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-503300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-503300/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-503300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 01:28:06.957977    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 01:28:06.957977    4296 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 01:28:06.957977    4296 buildroot.go:174] setting up certificates
	I0308 01:28:06.957977    4296 provision.go:84] configureAuth start
	I0308 01:28:06.957977    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:08.897727    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:08.897727    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:08.897727    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:11.198880    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:11.208684    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:11.208684    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:08.624782    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:11.112685    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:13.165613    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:13.165613    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:13.165701    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:15.438708    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:15.450339    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:15.450339    4296 provision.go:143] copyHostCerts
	I0308 01:28:15.450525    4296 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 01:28:15.450525    4296 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 01:28:15.451158    4296 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 01:28:15.452037    4296 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 01:28:15.452037    4296 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 01:28:15.452908    4296 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 01:28:15.454021    4296 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 01:28:15.454021    4296 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 01:28:15.454021    4296 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 01:28:15.455541    4296 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-503300 san=[127.0.0.1 172.20.59.53 kindnet-503300 localhost minikube]
	I0308 01:28:15.660535    4296 provision.go:177] copyRemoteCerts
	I0308 01:28:15.676008    4296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 01:28:15.676008    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:13.605668    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:15.616801    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:17.619864    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:17.594577    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:17.605242    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:17.605242    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:19.930601    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:19.930601    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:19.931485    4296 sshutil.go:53] new ssh client: &{IP:172.20.59.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\id_rsa Username:docker}
	I0308 01:28:20.039575    4296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3635266s)
	I0308 01:28:20.040344    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0308 01:28:20.110834    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 01:28:20.171515    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I0308 01:28:20.216759    4296 provision.go:87] duration metric: took 13.2586572s to configureAuth
	I0308 01:28:20.216828    4296 buildroot.go:189] setting minikube options for container-runtime
	I0308 01:28:20.217344    4296 config.go:182] Loaded profile config "kindnet-503300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:28:20.217457    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:20.118438    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:22.607726    3532 pod_ready.go:102] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"False"
	I0308 01:28:24.113267    3532 pod_ready.go:92] pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace has status "Ready":"True"
	I0308 01:28:24.113321    3532 pod_ready.go:81] duration metric: took 38.5182775s for pod "coredns-5dd5756b68-phwrk" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.113384    3532 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-rbjnx" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.118920    3532 pod_ready.go:97] error getting pod "coredns-5dd5756b68-rbjnx" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-rbjnx" not found
	I0308 01:28:24.118975    3532 pod_ready.go:81] duration metric: took 5.5907ms for pod "coredns-5dd5756b68-rbjnx" in "kube-system" namespace to be "Ready" ...
	E0308 01:28:24.119053    3532 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-rbjnx" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-rbjnx" not found
	I0308 01:28:24.119053    3532 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.128592    3532 pod_ready.go:92] pod "etcd-auto-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:28:24.128642    3532 pod_ready.go:81] duration metric: took 9.5179ms for pod "etcd-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.128642    3532 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.136071    3532 pod_ready.go:92] pod "kube-apiserver-auto-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:28:24.136071    3532 pod_ready.go:81] duration metric: took 7.4291ms for pod "kube-apiserver-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.136071    3532 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.144168    3532 pod_ready.go:92] pod "kube-controller-manager-auto-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:28:24.144168    3532 pod_ready.go:81] duration metric: took 8.0967ms for pod "kube-controller-manager-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.144168    3532 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-pstch" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.321007    3532 pod_ready.go:92] pod "kube-proxy-pstch" in "kube-system" namespace has status "Ready":"True"
	I0308 01:28:24.321109    3532 pod_ready.go:81] duration metric: took 176.9397ms for pod "kube-proxy-pstch" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.321109    3532 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.717556    3532 pod_ready.go:92] pod "kube-scheduler-auto-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:28:24.717556    3532 pod_ready.go:81] duration metric: took 396.4427ms for pod "kube-scheduler-auto-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:28:24.717665    3532 pod_ready.go:38] duration metric: took 39.152171s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:28:24.717735    3532 api_server.go:52] waiting for apiserver process to appear ...
	I0308 01:28:24.730790    3532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 01:28:24.761106    3532 api_server.go:72] duration metric: took 41.6452137s to wait for apiserver process to appear ...
	I0308 01:28:24.761106    3532 api_server.go:88] waiting for apiserver healthz status ...
	I0308 01:28:24.761106    3532 api_server.go:253] Checking apiserver healthz at https://172.20.53.54:8443/healthz ...
	I0308 01:28:24.768097    3532 api_server.go:279] https://172.20.53.54:8443/healthz returned 200:
	ok
	I0308 01:28:24.771635    3532 api_server.go:141] control plane version: v1.28.4
	I0308 01:28:24.771635    3532 api_server.go:131] duration metric: took 10.5287ms to wait for apiserver health ...
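The healthz wait above simply polls the apiserver endpoint until it returns HTTP 200 with the body "ok". A manual equivalent against the same endpoint (sketch; -k skips TLS verification, which is acceptable for a quick liveness check but not for anything security-sensitive):

    curl -sk https://172.20.53.54:8443/healthz
    # expected output on a healthy control plane: ok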
	I0308 01:28:24.771635    3532 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 01:28:24.925203    3532 system_pods.go:59] 7 kube-system pods found
	I0308 01:28:24.925203    3532 system_pods.go:61] "coredns-5dd5756b68-phwrk" [f65c338e-f008-4ba8-ae07-263660851a7b] Running
	I0308 01:28:24.925728    3532 system_pods.go:61] "etcd-auto-503300" [1e75ae98-597e-4bb5-ab7b-b1a55acab24c] Running
	I0308 01:28:24.925728    3532 system_pods.go:61] "kube-apiserver-auto-503300" [e2df06cf-c573-457d-ac05-b0bc9c100ce7] Running
	I0308 01:28:24.926000    3532 system_pods.go:61] "kube-controller-manager-auto-503300" [384c0def-5b56-4d81-b8e1-5c22cfcfc666] Running
	I0308 01:28:24.926066    3532 system_pods.go:61] "kube-proxy-pstch" [b412098b-b79d-4940-af7d-3913d618242c] Running
	I0308 01:28:24.926066    3532 system_pods.go:61] "kube-scheduler-auto-503300" [941e75d4-af17-4bcd-9ae7-dd4e0e281fe7] Running
	I0308 01:28:24.926066    3532 system_pods.go:61] "storage-provisioner" [a9fcf94b-478e-496a-a649-bf2310768283] Running
	I0308 01:28:24.926066    3532 system_pods.go:74] duration metric: took 154.4295ms to wait for pod list to return data ...
	I0308 01:28:24.926066    3532 default_sa.go:34] waiting for default service account to be created ...
	I0308 01:28:25.120077    3532 default_sa.go:45] found service account: "default"
	I0308 01:28:25.120235    3532 default_sa.go:55] duration metric: took 194.1667ms for default service account to be created ...
	I0308 01:28:25.120235    3532 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 01:28:25.318119    3532 system_pods.go:86] 7 kube-system pods found
	I0308 01:28:25.318119    3532 system_pods.go:89] "coredns-5dd5756b68-phwrk" [f65c338e-f008-4ba8-ae07-263660851a7b] Running
	I0308 01:28:25.318119    3532 system_pods.go:89] "etcd-auto-503300" [1e75ae98-597e-4bb5-ab7b-b1a55acab24c] Running
	I0308 01:28:25.318119    3532 system_pods.go:89] "kube-apiserver-auto-503300" [e2df06cf-c573-457d-ac05-b0bc9c100ce7] Running
	I0308 01:28:25.318119    3532 system_pods.go:89] "kube-controller-manager-auto-503300" [384c0def-5b56-4d81-b8e1-5c22cfcfc666] Running
	I0308 01:28:25.318119    3532 system_pods.go:89] "kube-proxy-pstch" [b412098b-b79d-4940-af7d-3913d618242c] Running
	I0308 01:28:25.318119    3532 system_pods.go:89] "kube-scheduler-auto-503300" [941e75d4-af17-4bcd-9ae7-dd4e0e281fe7] Running
	I0308 01:28:25.318119    3532 system_pods.go:89] "storage-provisioner" [a9fcf94b-478e-496a-a649-bf2310768283] Running
	I0308 01:28:25.318119    3532 system_pods.go:126] duration metric: took 197.8822ms to wait for k8s-apps to be running ...
	I0308 01:28:25.318119    3532 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 01:28:25.334041    3532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 01:28:25.358538    3532 system_svc.go:56] duration metric: took 40.4191ms WaitForService to wait for kubelet
	I0308 01:28:25.358538    3532 kubeadm.go:576] duration metric: took 42.2428548s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 01:28:25.358538    3532 node_conditions.go:102] verifying NodePressure condition ...
	I0308 01:28:25.522748    3532 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 01:28:25.522748    3532 node_conditions.go:123] node cpu capacity is 2
	I0308 01:28:25.522748    3532 node_conditions.go:105] duration metric: took 164.2077ms to run NodePressure ...
	I0308 01:28:25.522748    3532 start.go:240] waiting for startup goroutines ...
	I0308 01:28:25.522748    3532 start.go:245] waiting for cluster config update ...
	I0308 01:28:25.522748    3532 start.go:254] writing updated cluster config ...
	I0308 01:28:25.535883    3532 ssh_runner.go:195] Run: rm -f paused
	I0308 01:28:25.672414    3532 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 01:28:25.676771    3532 out.go:177] * Done! kubectl is now configured to use "auto-503300" cluster and "default" namespace by default
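The start sequence above ends with the kubectl/cluster version comparison ("kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)"). As a rough illustration only, not minikube's actual code and with a hypothetical helper name, a minimal Go sketch that computes the same minor-version skew from two version strings:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew is a hypothetical helper: it returns |minor(a) - minor(b)|.
func minorSkew(a, b string) (int, error) {
	ma, err := minorOf(a)
	if err != nil {
		return 0, err
	}
	mb, err := minorOf(b)
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

// minorOf extracts the minor component from a version like "1.29.2" or "v1.28.4".
func minorOf(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	skew, _ := minorSkew("1.29.2", "1.28.4")
	fmt.Printf("kubectl: 1.29.2, cluster: 1.28.4 (minor skew: %d)\n", skew) // minor skew: 1
}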
	I0308 01:28:22.126656    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:22.138978    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:22.139070    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:24.455711    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:24.455711    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:24.464696    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:28:24.465103    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:28:24.465103    4296 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 01:28:24.599825    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 01:28:24.599825    4296 buildroot.go:70] root file system type: tmpfs
	I0308 01:28:24.600424    4296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 01:28:24.600553    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:26.612241    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:26.612241    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:26.612490    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:29.079624    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:29.079624    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:29.088721    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:28:29.089257    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:28:29.089414    4296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 01:28:29.251092    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 01:28:29.251092    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:31.256377    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:31.256377    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:31.256377    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:33.656031    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:33.656031    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:33.671205    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:28:33.671840    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:28:33.671896    4296 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 01:28:34.797424    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0308 01:28:34.797424    4296 machine.go:97] duration metric: took 41.1039953s to provisionDockerMachine
	I0308 01:28:34.797424    4296 client.go:171] duration metric: took 1m52.2389028s to LocalClient.Create
	I0308 01:28:34.797424    4296 start.go:167] duration metric: took 1m52.2389028s to libmachine.API.Create "kindnet-503300"
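The provisioning step above rewrites the Docker systemd unit only when it differs from what is already on the machine (`diff -u ... || { mv ...; systemctl daemon-reload ...; systemctl restart docker; }`). Below is a minimal Go sketch of that same update-if-changed pattern, run locally for illustration rather than over SSH as minikube does, and assuming root/sudo access; the helper name is made up:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// updateUnit replaces the live unit with the candidate only when the contents
// differ (or the live unit is missing), then reloads and restarts the service,
// mirroring the shell pipeline in the log above.
func updateUnit(current, candidate string) error {
	oldData, oldErr := os.ReadFile(current)
	newData, err := os.ReadFile(candidate)
	if err != nil {
		return err
	}
	if oldErr != nil || !bytes.Equal(oldData, newData) {
		if err := os.Rename(candidate, current); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %s", err, out)
			}
		}
	}
	return nil
}

func main() {
	if err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		log.Fatal(err)
	}
}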
	I0308 01:28:34.797424    4296 start.go:293] postStartSetup for "kindnet-503300" (driver="hyperv")
	I0308 01:28:34.797424    4296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 01:28:34.810079    4296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 01:28:34.810079    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:36.852532    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:36.852532    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:36.862855    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:39.285308    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:39.285360    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:39.285782    4296 sshutil.go:53] new ssh client: &{IP:172.20.59.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\id_rsa Username:docker}
	I0308 01:28:39.392996    4296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5828732s)
	I0308 01:28:39.405336    4296 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 01:28:39.412176    4296 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 01:28:39.412287    4296 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 01:28:39.412869    4296 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 01:28:39.414199    4296 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 01:28:39.424125    4296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 01:28:39.446277    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 01:28:39.488095    4296 start.go:296] duration metric: took 4.6906263s for postStartSetup
	I0308 01:28:39.489850    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:41.519810    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:41.530936    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:41.531034    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:43.909931    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:43.909931    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:43.921285    4296 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\config.json ...
	I0308 01:28:43.924289    4296 start.go:128] duration metric: took 2m1.3705466s to createHost
	I0308 01:28:43.924419    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:45.870673    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:45.870673    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:45.870673    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:48.179331    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:48.179331    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:48.183797    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:28:48.184620    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:28:48.184620    4296 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 01:28:48.316135    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709861328.318568458
	
	I0308 01:28:48.316135    4296 fix.go:216] guest clock: 1709861328.318568458
	I0308 01:28:48.316135    4296 fix.go:229] Guest: 2024-03-08 01:28:48.318568458 +0000 UTC Remote: 2024-03-08 01:28:43.9242891 +0000 UTC m=+287.842007501 (delta=4.394279358s)
	I0308 01:28:48.316135    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:50.238448    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:50.242936    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:50.242936    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:52.847972   14284 start.go:364] duration metric: took 4m42.5171573s to acquireMachinesLock for "calico-503300"
	I0308 01:28:52.848148   14284 start.go:93] Provisioning new machine with config: &{Name:calico-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 01:28:52.848703   14284 start.go:125] createHost starting for "" (driver="hyperv")
	I0308 01:28:52.855026   14284 out.go:204] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0308 01:28:52.855464   14284 start.go:159] libmachine.API.Create for "calico-503300" (driver="hyperv")
	I0308 01:28:52.855464   14284 client.go:168] LocalClient.Create starting
	I0308 01:28:52.856125   14284 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0308 01:28:52.856125   14284 main.go:141] libmachine: Decoding PEM data...
	I0308 01:28:52.856799   14284 main.go:141] libmachine: Parsing certificate...
	I0308 01:28:52.856948   14284 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0308 01:28:52.856948   14284 main.go:141] libmachine: Decoding PEM data...
	I0308 01:28:52.856948   14284 main.go:141] libmachine: Parsing certificate...
	I0308 01:28:52.856948   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0308 01:28:54.706282   14284 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0308 01:28:54.706350   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:54.706415   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0308 01:28:52.682059    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:52.682059    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:52.698526    4296 main.go:141] libmachine: Using SSH client type: native
	I0308 01:28:52.699103    4296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.59.53 22 <nil> <nil>}
	I0308 01:28:52.699103    4296 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709861328
	I0308 01:28:52.847498    4296 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 01:28:48 UTC 2024
	
	I0308 01:28:52.847498    4296 fix.go:236] clock set: Fri Mar  8 01:28:48 UTC 2024
	 (err=<nil>)
	I0308 01:28:52.847498    4296 start.go:83] releasing machines lock for "kindnet-503300", held for 2m10.2943561s
	I0308 01:28:52.847845    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:54.876884    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:54.887303    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:54.887381    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:56.367257   14284 main.go:141] libmachine: [stdout =====>] : False
	
	I0308 01:28:56.374727   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:56.374727   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0308 01:28:57.963627   14284 main.go:141] libmachine: [stdout =====>] : True
	
	I0308 01:28:57.963627   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:57.963627   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0308 01:28:57.337535    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:28:57.338403    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:57.341909    4296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 01:28:57.341967    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:57.358821    4296 ssh_runner.go:195] Run: cat /version.json
	I0308 01:28:57.358821    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:28:59.726995    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:59.726995    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:59.738242    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:28:59.738408    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:28:59.738408    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:28:59.738632    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:29:01.817577   14284 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0308 01:29:01.817577   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:01.820721   14284 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube7/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0308 01:29:02.310723   14284 main.go:141] libmachine: Creating SSH key...
	I0308 01:29:02.505648   14284 main.go:141] libmachine: Creating VM...
	I0308 01:29:02.505648   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0308 01:29:02.517427    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:29:02.517495    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:02.517495    4296 sshutil.go:53] new ssh client: &{IP:172.20.59.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\id_rsa Username:docker}
	I0308 01:29:02.581115    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:29:02.581115    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:02.581871    4296 sshutil.go:53] new ssh client: &{IP:172.20.59.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\id_rsa Username:docker}
	I0308 01:29:02.704796    4296 ssh_runner.go:235] Completed: cat /version.json: (5.3459257s)
	I0308 01:29:02.704897    4296 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3628372s)
	I0308 01:29:02.715802    4296 ssh_runner.go:195] Run: systemctl --version
	I0308 01:29:02.738006    4296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 01:29:02.745609    4296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 01:29:02.755378    4296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 01:29:02.784503    4296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 01:29:02.784503    4296 start.go:494] detecting cgroup driver to use...
	I0308 01:29:02.784503    4296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:29:02.828784    4296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 01:29:02.858572    4296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 01:29:02.876733    4296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 01:29:02.888844    4296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 01:29:02.925656    4296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:29:02.975010    4296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 01:29:03.012592    4296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:29:03.047689    4296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 01:29:03.079703    4296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 01:29:03.121888    4296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 01:29:03.152031    4296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 01:29:03.179924    4296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:29:03.393493    4296 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0308 01:29:03.424130    4296 start.go:494] detecting cgroup driver to use...
	I0308 01:29:03.436874    4296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 01:29:03.477114    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:29:03.513082    4296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 01:29:03.563020    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:29:03.601435    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 01:29:03.637613    4296 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0308 01:29:03.856671    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 01:29:03.879459    4296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:29:03.923328    4296 ssh_runner.go:195] Run: which cri-dockerd
	I0308 01:29:03.939522    4296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 01:29:03.956398    4296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 01:29:03.996726    4296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 01:29:04.204935    4296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 01:29:04.408789    4296 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 01:29:04.408996    4296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0308 01:29:04.453363    4296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:29:04.650247    4296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 01:29:06.219372    4296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5691103s)
	I0308 01:29:06.231533    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 01:29:06.264324    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:29:06.301558    4296 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 01:29:06.507331    4296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 01:29:06.695957    4296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:29:06.892725    4296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 01:29:06.932604    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:29:06.965421    4296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:29:07.177045    4296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 01:29:07.275626    4296 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 01:29:07.297656    4296 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 01:29:07.318428    4296 start.go:562] Will wait 60s for crictl version
	I0308 01:29:07.332698    4296 ssh_runner.go:195] Run: which crictl
	I0308 01:29:07.351395    4296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 01:29:07.421295    4296 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 01:29:07.433169    4296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:29:07.481168    4296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:29:05.498776   14284 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0308 01:29:05.509686   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:05.509733   14284 main.go:141] libmachine: Using switch "Default Switch"
	I0308 01:29:05.509733   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0308 01:29:07.288320   14284 main.go:141] libmachine: [stdout =====>] : True
	
	I0308 01:29:07.288401   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:07.288533   14284 main.go:141] libmachine: Creating VHD
	I0308 01:29:07.288584   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0308 01:29:07.515039    4296 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 01:29:07.515039    4296 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 01:29:07.519451    4296 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 01:29:07.519451    4296 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 01:29:07.519451    4296 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 01:29:07.519451    4296 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 01:29:07.523963    4296 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 01:29:07.524043    4296 ip.go:210] interface addr: 172.20.48.1/20
	I0308 01:29:07.534565    4296 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 01:29:07.540976    4296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
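The ip.go lines above locate the host-side address of the Hyper-V "vEthernet (Default Switch)" interface by scanning interfaces for a name prefix, and then pin that address in the guest's /etc/hosts as host.minikube.internal. A small Go sketch of the interface lookup, with assumed helper naming and not taken from minikube's ip.go:

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipForInterface returns the first IPv4 address of the first interface whose
// name begins with the given prefix.
func ipForInterface(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue // e.g. "Ethernet 2" does not match "vEthernet (Default Switch)"
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, addr := range addrs {
			if ipnet, ok := addr.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil // e.g. 172.20.48.1
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q", prefix)
}

func main() {
	ip, err := ipForInterface("vEthernet (Default Switch)")
	fmt.Println(ip, err)
}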
	I0308 01:29:07.562068    4296 kubeadm.go:877] updating cluster {Name:kindnet-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:172.20.59.53 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 01:29:07.562629    4296 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 01:29:07.570935    4296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:29:07.597464    4296 docker.go:685] Got preloaded images: 
	I0308 01:29:07.597464    4296 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0308 01:29:07.608871    4296 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0308 01:29:07.639344    4296 ssh_runner.go:195] Run: which lz4
	I0308 01:29:07.656448    4296 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 01:29:07.666711    4296 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 01:29:07.667047    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0308 01:29:10.128643    4296 docker.go:649] duration metric: took 2.483628s to copy over tarball
	I0308 01:29:10.141105    4296 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 01:29:11.443487   14284 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube7
	Path                    : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 0194326F-3E01-4A40-86E0-D3138E67F54E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0308 01:29:11.443586   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:11.443677   14284 main.go:141] libmachine: Writing magic tar header
	I0308 01:29:11.443770   14284 main.go:141] libmachine: Writing SSH key tar header
	I0308 01:29:11.456176   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0308 01:29:14.654342   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:14.654342   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:14.654342   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\disk.vhd' -SizeBytes 20000MB
	I0308 01:29:17.452263   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:17.463255   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:17.463344   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM calico-503300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0308 01:29:19.237034    4296 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.0956412s)
	I0308 01:29:19.237034    4296 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0308 01:29:19.321667    4296 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0308 01:29:19.341678    4296 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0308 01:29:19.389615    4296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:29:19.610569    4296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 01:29:23.596370   14284 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	calico-503300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0308 01:29:23.607494   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:23.607614   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName calico-503300 -DynamicMemoryEnabled $false
	I0308 01:29:23.446616    4296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.8359364s)
	I0308 01:29:23.457539    4296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:29:23.485281    4296 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0308 01:29:23.485281    4296 cache_images.go:84] Images are preloaded, skipping loading
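The preload flow above copies preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 to the guest and unpacks it into /var before restarting Docker, which is why `docker images` now lists the Kubernetes images and image loading can be skipped. A hedged Go sketch of just the extraction step; it shells out to the same tar invocation shown in the log and assumes the tarball path exists and that passwordless sudo is available:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Unpack the preloaded image tarball into /var, preserving security xattrs,
	// decompressing with lz4 on the fly (matches the ssh_runner command above).
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4",
		"-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extracting preload: %v", err)
	}
	// Remove the tarball afterwards, as the log does with ssh_runner rm.
	if err := os.Remove("/preloaded.tar.lz4"); err != nil {
		log.Printf("cleanup: %v", err)
	}
}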
	I0308 01:29:23.485281    4296 kubeadm.go:928] updating node { 172.20.59.53 8443 v1.28.4 docker true true} ...
	I0308 01:29:23.485848    4296 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-503300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.59.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:kindnet-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0308 01:29:23.497187    4296 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0308 01:29:23.531386    4296 cni.go:84] Creating CNI manager for "kindnet"
	I0308 01:29:23.531386    4296 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 01:29:23.531386    4296 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.59.53 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-503300 NodeName:kindnet-503300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.59.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.59.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 01:29:23.531918    4296 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.59.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kindnet-503300"
	  kubeletExtraArgs:
	    node-ip: 172.20.59.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.59.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 01:29:23.543438    4296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 01:29:23.561128    4296 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 01:29:23.573283    4296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 01:29:23.593564    4296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0308 01:29:23.627397    4296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 01:29:23.656318    4296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0308 01:29:23.704450    4296 ssh_runner.go:195] Run: grep 172.20.59.53	control-plane.minikube.internal$ /etc/hosts
	I0308 01:29:23.710637    4296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.59.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 01:29:23.747766    4296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:29:23.947467    4296 ssh_runner.go:195] Run: sudo systemctl start kubelet
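The generated kubeadm config above uses podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12, which must not overlap for routing to work. A small illustrative Go check, not part of minikube; since CIDR blocks are either disjoint or nested, mutual containment of the network addresses is enough:

package main

import (
	"fmt"
	"net"
)

// overlap reports whether two CIDR blocks share any addresses.
func overlap(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, podNet, _ := net.ParseCIDR("10.244.0.0/16")
	_, svcNet, _ := net.ParseCIDR("10.96.0.0/12")
	fmt.Println("pod/service CIDRs overlap:", overlap(podNet, svcNet)) // false
}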
	I0308 01:29:23.977331    4296 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300 for IP: 172.20.59.53
	I0308 01:29:23.977397    4296 certs.go:194] generating shared ca certs ...
	I0308 01:29:23.977397    4296 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:23.977936    4296 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 01:29:23.978004    4296 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 01:29:23.978004    4296 certs.go:256] generating profile certs ...
	I0308 01:29:23.979950    4296 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\client.key
	I0308 01:29:23.980054    4296 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\client.crt with IP's: []
	I0308 01:29:24.668337    4296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\client.crt ...
	I0308 01:29:24.668337    4296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\client.crt: {Name:mk8f465e51edeb407eb33cac94211a7e4a114757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:24.678831    4296 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\client.key ...
	I0308 01:29:24.678831    4296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\client.key: {Name:mk58a3b04c69666836f729848c9655d649721fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:24.680279    4296 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.key.7a7734c3
	I0308 01:29:24.680279    4296 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.crt.7a7734c3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.59.53]
	I0308 01:29:24.988426    4296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.crt.7a7734c3 ...
	I0308 01:29:24.988426    4296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.crt.7a7734c3: {Name:mk95a2638b13f7ddc8d5da186f034c40eec335c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:24.996943    4296 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.key.7a7734c3 ...
	I0308 01:29:24.996943    4296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.key.7a7734c3: {Name:mke8fbabdecb4785ede3c8aaad268f9abab5b5d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:24.998570    4296 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.crt.7a7734c3 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.crt
	I0308 01:29:24.998914    4296 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.key.7a7734c3 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.key
	I0308 01:29:25.009100    4296 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.key
	I0308 01:29:25.009892    4296 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.crt with IP's: []
	I0308 01:29:25.081249    4296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.crt ...
	I0308 01:29:25.081249    4296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.crt: {Name:mk80dc2234368544f4797a59ca64fefb459352cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:25.091406    4296 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.key ...
	I0308 01:29:25.091406    4296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.key: {Name:mkd2c1f44a1d4baf197686f0dcb458f5bf6bbd8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
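The certs.go steps above generate per-profile client, apiserver, and aggregator certificates before copying them into /var/lib/minikube/certs. As a standard-library illustration only (self-signed here, whereas minikube signs these with its minikubeCA), a short Go sketch that produces a certificate and key with a comparable subject, SAN IP, and the 26280h expiration from the cluster config:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		IPAddresses:  []net.IP{net.ParseIP("172.20.59.53")},
	}
	// Self-signed for brevity; minikube would sign with the minikubeCA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}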
	I0308 01:29:25.101906    4296 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 01:29:25.104228    4296 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 01:29:25.104543    4296 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 01:29:25.104543    4296 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 01:29:25.105233    4296 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 01:29:25.105694    4296 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 01:29:25.106089    4296 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 01:29:25.106714    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 01:29:25.162950    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 01:29:25.208020    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 01:29:25.251784    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 01:29:25.298451    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0308 01:29:25.345109    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 01:29:25.399616    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 01:29:25.449923    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-503300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 01:29:25.500462    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 01:29:25.540109    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 01:29:25.579255    4296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 01:29:25.629559    4296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 01:29:25.670536    4296 ssh_runner.go:195] Run: openssl version
	I0308 01:29:25.688717    4296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 01:29:25.718652    4296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 01:29:25.726137    4296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 01:29:25.739775    4296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 01:29:25.759024    4296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 01:29:25.793766    4296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 01:29:25.824024    4296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:29:25.833565    4296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:29:25.845940    4296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:29:25.866936    4296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 01:29:25.896181    4296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 01:29:25.924715    4296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 01:29:25.931773    4296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 01:29:25.948285    4296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 01:29:25.975211    4296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
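
The three openssl/ln sequences above install each CA into the guest's trust store: "openssl x509 -hash -noout -in <cert>" prints the OpenSSL subject hash (e.g. b5213941 for minikubeCA.pem) and the certificate is then symlinked as /etc/ssl/certs/<hash>.0, the name OpenSSL looks up at verification time. A minimal Go sketch of that step, run on the target host and assuming openssl is installed (illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // trustCert computes the OpenSSL subject hash of certPath and links it
    // into /etc/ssl/certs as <hash>.0.
    func trustCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // replace a stale link, mirroring ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
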
	I0308 01:29:26.013282    4296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 01:29:26.021615    4296 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 01:29:26.021615    4296 kubeadm.go:391] StartCluster: {Name:kindnet-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:172.20.59.53 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 01:29:26.032056    4296 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 01:29:26.074338    4296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 01:29:26.107788    4296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 01:29:26.144676    4296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 01:29:26.162092    4296 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 01:29:26.162092    4296 kubeadm.go:156] found existing configuration files:
	
	I0308 01:29:26.174546    4296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 01:29:26.191168    4296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 01:29:26.204032    4296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 01:29:26.235037    4296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 01:29:26.251753    4296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 01:29:26.264158    4296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 01:29:26.299752    4296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 01:29:26.322295    4296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 01:29:26.339118    4296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 01:29:26.371065    4296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 01:29:26.390208    4296 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 01:29:26.403906    4296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
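
The config check above (kubeadm.go:154 and kubeadm.go:162) greps each kubeconfig under /etc/kubernetes for the expected endpoint https://control-plane.minikube.internal:8443 and removes any file that does not contain it; here none of the files exist yet, so every grep exits with status 2 and the rm -f calls are no-ops. A simplified local Go sketch of the same cleanup (illustrative only, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, name := range files {
            path := filepath.Join("/etc/kubernetes", name)
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or stale endpoint: remove it so kubeadm regenerates it.
                os.Remove(path)
                fmt.Println("removed stale config:", path)
            }
        }
    }
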
	I0308 01:29:26.421437    4296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 01:29:26.491372    4296 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 01:29:26.491372    4296 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 01:29:26.686250    4296 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 01:29:26.686347    4296 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 01:29:26.686347    4296 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 01:29:27.096139    4296 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 01:29:25.847509   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:25.852449   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:25.852633   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor calico-503300 -Count 2
	I0308 01:29:28.090979   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:28.090979   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:28.090979   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName calico-503300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\boot2docker.iso'
	I0308 01:29:27.107309    4296 out.go:204]   - Generating certificates and keys ...
	I0308 01:29:27.109744    4296 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 01:29:27.109982    4296 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 01:29:27.313995    4296 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 01:29:27.395081    4296 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0308 01:29:27.929680    4296 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0308 01:29:28.330719    4296 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0308 01:29:28.591316    4296 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0308 01:29:28.591377    4296 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kindnet-503300 localhost] and IPs [172.20.59.53 127.0.0.1 ::1]
	I0308 01:29:29.035541    4296 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0308 01:29:29.036134    4296 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kindnet-503300 localhost] and IPs [172.20.59.53 127.0.0.1 ::1]
	I0308 01:29:29.241840    4296 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 01:29:29.381770    4296 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 01:29:29.534867    4296 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0308 01:29:29.534867    4296 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 01:29:29.795063    4296 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 01:29:30.078442    4296 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 01:29:30.511356    4296 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 01:29:31.171264    4296 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 01:29:31.175285    4296 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 01:29:31.180490    4296 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 01:29:31.183259    4296 out.go:204]   - Booting up control plane ...
	I0308 01:29:31.183259    4296 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 01:29:31.183925    4296 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 01:29:31.186299    4296 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 01:29:31.218708    4296 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 01:29:31.220755    4296 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 01:29:31.220755    4296 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 01:29:30.629964   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:30.629964   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:30.630251   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName calico-503300 -Path 'C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\disk.vhd'
	I0308 01:29:33.294388   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:33.294455   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:33.294455   14284 main.go:141] libmachine: Starting VM...
	I0308 01:29:33.294455   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM calico-503300
	I0308 01:29:31.419492    4296 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 01:29:36.368100   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:36.368180   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:36.368261   14284 main.go:141] libmachine: Waiting for host to start...
	I0308 01:29:36.368261   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:29:38.735807   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:38.735885   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:38.736011   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
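
The 14284 lines interleaved through this section show libmachine waiting for the calico-503300 VM to come up: it repeatedly asks PowerShell for ( Hyper-V\Get-VM <name> ).state and (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0] until the guest reports an address (172.20.55.16 further down). A rough Go sketch of such a polling loop, an assumed structure rather than the actual driver code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs a single PowerShell expression and returns its trimmed stdout.
    func ps(expr string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls Hyper-V until the first network adapter reports an address.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ip, err := ps(query)
            if err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(5 * time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
    }

    func main() {
        ip, err := waitForIP("calico-503300", 10*time.Minute)
        fmt.Println(ip, err)
    }
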
	I0308 01:29:39.921440    4296 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.504608 seconds
	I0308 01:29:39.921440    4296 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 01:29:39.960928    4296 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 01:29:40.532501    4296 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 01:29:40.533394    4296 kubeadm.go:309] [mark-control-plane] Marking the node kindnet-503300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 01:29:41.052261    4296 kubeadm.go:309] [bootstrap-token] Using token: 4fmjec.4wkw7d5f8hy8oofx
	I0308 01:29:41.054824    4296 out.go:204]   - Configuring RBAC rules ...
	I0308 01:29:41.055427    4296 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 01:29:41.066772    4296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 01:29:41.081384    4296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 01:29:41.105840    4296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 01:29:41.115482    4296 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 01:29:41.122916    4296 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 01:29:41.151732    4296 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 01:29:41.537117    4296 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 01:29:41.587087    4296 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 01:29:41.587844    4296 kubeadm.go:309] 
	I0308 01:29:41.588907    4296 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 01:29:41.588969    4296 kubeadm.go:309] 
	I0308 01:29:41.589145    4296 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 01:29:41.589145    4296 kubeadm.go:309] 
	I0308 01:29:41.589145    4296 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 01:29:41.589145    4296 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 01:29:41.589943    4296 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 01:29:41.589995    4296 kubeadm.go:309] 
	I0308 01:29:41.590330    4296 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 01:29:41.590434    4296 kubeadm.go:309] 
	I0308 01:29:41.590662    4296 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 01:29:41.590662    4296 kubeadm.go:309] 
	I0308 01:29:41.590893    4296 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 01:29:41.591103    4296 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 01:29:41.591824    4296 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 01:29:41.591936    4296 kubeadm.go:309] 
	I0308 01:29:41.592383    4296 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 01:29:41.592641    4296 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 01:29:41.592703    4296 kubeadm.go:309] 
	I0308 01:29:41.593063    4296 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4fmjec.4wkw7d5f8hy8oofx \
	I0308 01:29:41.593063    4296 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 \
	I0308 01:29:41.593063    4296 kubeadm.go:309] 	--control-plane 
	I0308 01:29:41.593063    4296 kubeadm.go:309] 
	I0308 01:29:41.593063    4296 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 01:29:41.593063    4296 kubeadm.go:309] 
	I0308 01:29:41.594433    4296 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4fmjec.4wkw7d5f8hy8oofx \
	I0308 01:29:41.594433    4296 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
	I0308 01:29:41.595013    4296 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 01:29:41.595104    4296 cni.go:84] Creating CNI manager for "kindnet"
	I0308 01:29:41.598409    4296 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0308 01:29:41.375904   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:41.375904   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:42.387224   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:29:44.574220   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:44.574353   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:44.574407   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:29:41.611149    4296 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0308 01:29:41.613072    4296 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0308 01:29:41.613072    4296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0308 01:29:41.672362    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0308 01:29:43.134826    4296 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.4623368s)
	I0308 01:29:43.134912    4296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 01:29:43.149990    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:43.157655    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-503300 minikube.k8s.io/updated_at=2024_03_08T01_29_43_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=kindnet-503300 minikube.k8s.io/primary=true
	I0308 01:29:43.174950    4296 ops.go:34] apiserver oom_adj: -16
	I0308 01:29:43.341776    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:43.851833    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:44.352063    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:44.853198    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:45.365708    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:45.856545    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:47.015022   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:47.015022   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:48.017286   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:29:50.342571   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:50.342646   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:50.342677   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:29:46.352934    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:46.852122    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:47.349293    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:47.853230    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:48.350961    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:48.852196    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:49.362921    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:49.852530    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:50.359546    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:50.856190    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:51.344753    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:51.862615    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:52.351321    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:52.845661    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:53.362194    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:53.853843    4296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:29:54.074984    4296 kubeadm.go:1106] duration metric: took 10.9398512s to wait for elevateKubeSystemPrivileges
	W0308 01:29:54.075133    4296 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 01:29:54.075133    4296 kubeadm.go:393] duration metric: took 28.053254s to StartCluster
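
The run of "kubectl get sa default" commands above is a roughly 500ms retry loop: after creating the minikube-rbac clusterrolebinding, minikube keeps polling until the "default" ServiceAccount exists in the new cluster (about 11s here, per the elevateKubeSystemPrivileges metric) before declaring StartCluster done. A hedged sketch of an equivalent wait that shells out to kubectl the same way (helper name and paths are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until the ServiceAccount exists.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account did not appear within %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
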
	I0308 01:29:54.075219    4296 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:54.075344    4296 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 01:29:54.078651    4296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:29:54.079632    4296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0308 01:29:54.080187    4296 start.go:234] Will wait 15m0s for node &{Name: IP:172.20.59.53 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 01:29:54.080187    4296 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 01:29:54.080329    4296 addons.go:69] Setting storage-provisioner=true in profile "kindnet-503300"
	I0308 01:29:54.080427    4296 addons.go:69] Setting default-storageclass=true in profile "kindnet-503300"
	I0308 01:29:54.080427    4296 addons.go:234] Setting addon storage-provisioner=true in "kindnet-503300"
	I0308 01:29:54.080427    4296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-503300"
	I0308 01:29:54.086238    4296 out.go:177] * Verifying Kubernetes components...
	I0308 01:29:52.944678   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:52.944678   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:53.952980   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:29:54.080427    4296 host.go:66] Checking if "kindnet-503300" exists ...
	I0308 01:29:54.080931    4296 config.go:182] Loaded profile config "kindnet-503300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:29:54.081795    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:29:54.091038    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:29:54.116771    4296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:29:54.674366    4296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0308 01:29:54.818016    4296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:29:55.609416    4296 start.go:948] {"host.minikube.internal": 172.20.48.1} host record injected into CoreDNS's ConfigMap
	I0308 01:29:55.614747    4296 node_ready.go:35] waiting up to 15m0s for node "kindnet-503300" to be "Ready" ...
	I0308 01:29:56.133476    4296 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-503300" context rescaled to 1 replicas
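
The start.go:948 entry above confirms what the long sed pipeline at 01:29:54 did: it rewrites the coredns ConfigMap's Corefile, inserting a hosts block that maps host.minikube.internal to the Windows host (172.20.48.1) ahead of the forward-to-resolv.conf rule (and adds a log directive before errors), then replaces the ConfigMap. A small Go sketch of that Corefile edit done as a plain string transform (illustrative only):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts{} block before the forward directive,
    // mirroring the edit minikube applies to the coredns Corefile.
    func injectHostRecord(corefile, hostIP string) string {
        hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        var b strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                b.WriteString(hostsBlock)
            }
            b.WriteString(line)
        }
        return b.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
        fmt.Print(injectHostRecord(corefile, "172.20.48.1"))
    }
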
	I0308 01:29:56.920977    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:56.920977    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:56.921525    4296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 01:29:56.757873   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:56.760327   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:56.760500   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:29:59.916030   14284 main.go:141] libmachine: [stdout =====>] : 
	I0308 01:29:59.916085   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:56.927479    4296 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 01:29:56.927479    4296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 01:29:56.927479    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:29:56.950254    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:56.950306    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:56.953834    4296 addons.go:234] Setting addon default-storageclass=true in "kindnet-503300"
	I0308 01:29:56.953978    4296 host.go:66] Checking if "kindnet-503300" exists ...
	I0308 01:29:56.955352    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:29:57.634505    4296 node_ready.go:53] node "kindnet-503300" has status "Ready":"False"
	I0308 01:29:59.652306    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:59.652306    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:59.654763    4296 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 01:29:59.654859    4296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 01:29:59.654952    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-503300 ).state
	I0308 01:29:59.751391    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:29:59.751456    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:29:59.751456    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:00.123532    4296 node_ready.go:53] node "kindnet-503300" has status "Ready":"False"
	I0308 01:30:00.935874   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:03.702553   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:03.702553   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:03.702553   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:02.133778    4296 node_ready.go:53] node "kindnet-503300" has status "Ready":"False"
	I0308 01:30:02.280051    4296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:02.280535    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:02.280796    4296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:02.723194    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:30:02.723241    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:02.723881    4296 sshutil.go:53] new ssh client: &{IP:172.20.59.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\id_rsa Username:docker}
	I0308 01:30:02.908686    4296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 01:30:04.264217    4296 node_ready.go:53] node "kindnet-503300" has status "Ready":"False"
	I0308 01:30:04.437409    4296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.5271645s)
	I0308 01:30:05.423390    4296 main.go:141] libmachine: [stdout =====>] : 172.20.59.53
	
	I0308 01:30:05.423390    4296 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:05.423950    4296 sshutil.go:53] new ssh client: &{IP:172.20.59.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kindnet-503300\id_rsa Username:docker}
	I0308 01:30:05.573626    4296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 01:30:05.923243    4296 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0308 01:30:05.925466    4296 addons.go:505] duration metric: took 11.8451669s for enable addons: enabled=[storage-provisioner default-storageclass]
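
Each addon above is enabled the same way: the manifest is pushed to /etc/kubernetes/addons/ on the guest (storage-provisioner.yaml, storageclass.yaml) and then applied with the bundled kubectl against /var/lib/minikube/kubeconfig. A compressed sketch of that apply step as if run directly on the guest (paths taken from the log, helper itself is illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyAddon writes a manifest under /etc/kubernetes/addons and applies it
    // with the cluster's bundled kubectl.
    func applyAddon(name string, manifest []byte) error {
        path := "/etc/kubernetes/addons/" + name
        if err := os.WriteFile(path, manifest, 0o644); err != nil {
            return err
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubectl", "apply", "-f", path)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        manifest, err := os.ReadFile("storage-provisioner.yaml")
        if err != nil {
            panic(err)
        }
        fmt.Println(applyAddon("storage-provisioner.yaml", manifest))
    }
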
	I0308 01:30:06.453762   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:06.453762   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:06.460911   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:08.599201   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:08.599426   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:08.599426   14284 machine.go:94] provisionDockerMachine start ...
	I0308 01:30:08.599505   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:06.630161    4296 node_ready.go:53] node "kindnet-503300" has status "Ready":"False"
	I0308 01:30:07.628160    4296 node_ready.go:49] node "kindnet-503300" has status "Ready":"True"
	I0308 01:30:07.628160    4296 node_ready.go:38] duration metric: took 12.0131721s for node "kindnet-503300" to be "Ready" ...
	I0308 01:30:07.628160    4296 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:30:07.640804    4296 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-6srjt" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.658628    4296 pod_ready.go:92] pod "coredns-5dd5756b68-6srjt" in "kube-system" namespace has status "Ready":"True"
	I0308 01:30:09.658628    4296 pod_ready.go:81] duration metric: took 2.0178052s for pod "coredns-5dd5756b68-6srjt" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.658628    4296 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.667095    4296 pod_ready.go:92] pod "etcd-kindnet-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:30:09.667095    4296 pod_ready.go:81] duration metric: took 8.4673ms for pod "etcd-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.667192    4296 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.676193    4296 pod_ready.go:92] pod "kube-apiserver-kindnet-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:30:09.676193    4296 pod_ready.go:81] duration metric: took 9.0008ms for pod "kube-apiserver-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.676193    4296 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.685335    4296 pod_ready.go:92] pod "kube-controller-manager-kindnet-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:30:09.685422    4296 pod_ready.go:81] duration metric: took 9.2289ms for pod "kube-controller-manager-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.685422    4296 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-gzd7d" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.693070    4296 pod_ready.go:92] pod "kube-proxy-gzd7d" in "kube-system" namespace has status "Ready":"True"
	I0308 01:30:09.693070    4296 pod_ready.go:81] duration metric: took 7.5277ms for pod "kube-proxy-gzd7d" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:09.693070    4296 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:10.056528    4296 pod_ready.go:92] pod "kube-scheduler-kindnet-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:30:10.056650    4296 pod_ready.go:81] duration metric: took 363.5764ms for pod "kube-scheduler-kindnet-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:30:10.056650    4296 pod_ready.go:38] duration metric: took 2.4284671s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
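
The pod_ready.go entries above wait for each system-critical pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) to report the Ready condition before the start is considered healthy. A hedged client-go sketch of one such wait; the k8s.io import paths are standard, the helper itself is illustrative:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod in kube-system until its Ready condition is True.
    func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // not found yet; keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitPodReady(cs, "etcd-kindnet-503300", 15*time.Minute))
    }
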
	I0308 01:30:10.056650    4296 api_server.go:52] waiting for apiserver process to appear ...
	I0308 01:30:10.066822    4296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 01:30:10.097409    4296 api_server.go:72] duration metric: took 16.0169295s to wait for apiserver process to appear ...
	I0308 01:30:10.097409    4296 api_server.go:88] waiting for apiserver healthz status ...
	I0308 01:30:10.097409    4296 api_server.go:253] Checking apiserver healthz at https://172.20.59.53:8443/healthz ...
	I0308 01:30:10.103907    4296 api_server.go:279] https://172.20.59.53:8443/healthz returned 200:
	ok
	I0308 01:30:10.108614    4296 api_server.go:141] control plane version: v1.28.4
	I0308 01:30:10.108674    4296 api_server.go:131] duration metric: took 11.2651ms to wait for apiserver health ...
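
The api_server.go lines above treat the control plane as healthy once https://172.20.59.53:8443/healthz returns 200 with body "ok". A minimal probe along those lines; TLS verification is skipped here purely to keep the sketch short, which is not necessarily what the real check does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://172.20.59.53:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
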
	I0308 01:30:10.108739    4296 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 01:30:10.264607    4296 system_pods.go:59] 8 kube-system pods found
	I0308 01:30:10.264672    4296 system_pods.go:61] "coredns-5dd5756b68-6srjt" [685f0935-230c-4286-b225-28220d432dab] Running
	I0308 01:30:10.264672    4296 system_pods.go:61] "etcd-kindnet-503300" [b938033c-513e-46c3-b555-ade94b8be310] Running
	I0308 01:30:10.264672    4296 system_pods.go:61] "kindnet-gb58t" [b90faeee-74b4-4a1c-9e75-d869293763cb] Running
	I0308 01:30:10.264672    4296 system_pods.go:61] "kube-apiserver-kindnet-503300" [db3b13ea-ba3f-4fce-b11c-fe63cde5c504] Running
	I0308 01:30:10.264672    4296 system_pods.go:61] "kube-controller-manager-kindnet-503300" [50f720f3-0896-459d-979f-41783837b456] Running
	I0308 01:30:10.264672    4296 system_pods.go:61] "kube-proxy-gzd7d" [8a7e04cd-2cbd-44ba-a540-0de5f7f0a7a8] Running
	I0308 01:30:10.264672    4296 system_pods.go:61] "kube-scheduler-kindnet-503300" [1d6f8e0e-0a3c-475d-964b-1bf24163896a] Running
	I0308 01:30:10.264672    4296 system_pods.go:61] "storage-provisioner" [c50192b1-15cb-4cfa-afa9-2814304000e1] Running
	I0308 01:30:10.264672    4296 system_pods.go:74] duration metric: took 155.9314ms to wait for pod list to return data ...
	I0308 01:30:10.264767    4296 default_sa.go:34] waiting for default service account to be created ...
	I0308 01:30:10.466180    4296 default_sa.go:45] found service account: "default"
	I0308 01:30:10.466180    4296 default_sa.go:55] duration metric: took 201.4111ms for default service account to be created ...
	I0308 01:30:10.466180    4296 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 01:30:10.679637    4296 system_pods.go:86] 8 kube-system pods found
	I0308 01:30:10.679637    4296 system_pods.go:89] "coredns-5dd5756b68-6srjt" [685f0935-230c-4286-b225-28220d432dab] Running
	I0308 01:30:10.679637    4296 system_pods.go:89] "etcd-kindnet-503300" [b938033c-513e-46c3-b555-ade94b8be310] Running
	I0308 01:30:10.679637    4296 system_pods.go:89] "kindnet-gb58t" [b90faeee-74b4-4a1c-9e75-d869293763cb] Running
	I0308 01:30:10.679637    4296 system_pods.go:89] "kube-apiserver-kindnet-503300" [db3b13ea-ba3f-4fce-b11c-fe63cde5c504] Running
	I0308 01:30:10.679637    4296 system_pods.go:89] "kube-controller-manager-kindnet-503300" [50f720f3-0896-459d-979f-41783837b456] Running
	I0308 01:30:10.679637    4296 system_pods.go:89] "kube-proxy-gzd7d" [8a7e04cd-2cbd-44ba-a540-0de5f7f0a7a8] Running
	I0308 01:30:10.680225    4296 system_pods.go:89] "kube-scheduler-kindnet-503300" [1d6f8e0e-0a3c-475d-964b-1bf24163896a] Running
	I0308 01:30:10.680290    4296 system_pods.go:89] "storage-provisioner" [c50192b1-15cb-4cfa-afa9-2814304000e1] Running
	I0308 01:30:10.680290    4296 system_pods.go:126] duration metric: took 214.1078ms to wait for k8s-apps to be running ...
	I0308 01:30:10.680350    4296 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 01:30:10.692062    4296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 01:30:10.720346    4296 system_svc.go:56] duration metric: took 39.9957ms WaitForService to wait for kubelet
	I0308 01:30:10.720346    4296 kubeadm.go:576] duration metric: took 16.6398608s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 01:30:10.720346    4296 node_conditions.go:102] verifying NodePressure condition ...
	I0308 01:30:10.858937    4296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 01:30:10.859018    4296 node_conditions.go:123] node cpu capacity is 2
	I0308 01:30:10.859056    4296 node_conditions.go:105] duration metric: took 138.7084ms to run NodePressure ...
	I0308 01:30:10.859101    4296 start.go:240] waiting for startup goroutines ...
	I0308 01:30:10.859172    4296 start.go:245] waiting for cluster config update ...
	I0308 01:30:10.859207    4296 start.go:254] writing updated cluster config ...
	I0308 01:30:10.872572    4296 ssh_runner.go:195] Run: rm -f paused
	I0308 01:30:11.006933    4296 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 01:30:11.013918    4296 out.go:177] * Done! kubectl is now configured to use "kindnet-503300" cluster and "default" namespace by default
	I0308 01:30:10.728110   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:10.728110   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:10.728110   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:13.274663   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:13.274729   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:13.280577   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:30:13.281120   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:30:13.281120   14284 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 01:30:13.414967   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0308 01:30:13.414967   14284 buildroot.go:166] provisioning hostname "calico-503300"
	I0308 01:30:13.414967   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:15.524725   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:15.524725   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:15.524725   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:18.054021   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:18.054067   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:18.058583   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:30:18.058757   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:30:18.058757   14284 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-503300 && echo "calico-503300" | sudo tee /etc/hostname
	I0308 01:30:18.244577   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-503300
	
	I0308 01:30:18.244577   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:20.627255   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:20.627371   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:20.627480   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:23.271666   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:23.271954   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:23.277672   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:30:23.278247   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:30:23.278324   14284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-503300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-503300/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-503300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 01:30:23.440906   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 01:30:23.440906   14284 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 01:30:23.440906   14284 buildroot.go:174] setting up certificates
	I0308 01:30:23.440906   14284 provision.go:84] configureAuth start
	I0308 01:30:23.440906   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:25.708510   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:25.708510   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:25.718651   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:28.381299   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:28.392546   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:28.392546   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:30.740270   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:30.752050   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:30.752050   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:33.329685   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:33.329745   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:33.329805   14284 provision.go:143] copyHostCerts
	I0308 01:30:33.330402   14284 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 01:30:33.330402   14284 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 01:30:33.330402   14284 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 01:30:33.332442   14284 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 01:30:33.332510   14284 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 01:30:33.332921   14284 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 01:30:33.334647   14284 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 01:30:33.334647   14284 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 01:30:33.335103   14284 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 01:30:33.336487   14284 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-503300 san=[127.0.0.1 172.20.55.16 calico-503300 localhost minikube]
	I0308 01:30:33.587115   14284 provision.go:177] copyRemoteCerts
	I0308 01:30:33.604578   14284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 01:30:33.604743   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:35.692546   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:35.704009   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:35.704143   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:38.265930   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:38.265930   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:38.274418   14284 sshutil.go:53] new ssh client: &{IP:172.20.55.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\id_rsa Username:docker}
	I0308 01:30:38.382038   14284 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7773585s)
	I0308 01:30:38.382628   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 01:30:38.428950   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 01:30:38.472932   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0308 01:30:38.525120   14284 provision.go:87] duration metric: took 15.0840734s to configureAuth
	I0308 01:30:38.525160   14284 buildroot.go:189] setting minikube options for container-runtime
	I0308 01:30:38.525201   14284 config.go:182] Loaded profile config "calico-503300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:30:38.525201   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:40.716673   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:40.716673   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:40.727582   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:43.295120   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:43.304180   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:43.309162   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:30:43.309162   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:30:43.309162   14284 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 01:30:43.444738   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 01:30:43.444738   14284 buildroot.go:70] root file system type: tmpfs
	I0308 01:30:43.445351   14284 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 01:30:43.445351   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:45.753656   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:45.753994   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:45.754068   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:48.225870   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:48.225870   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:48.235541   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:30:48.235808   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:30:48.235808   14284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 01:30:48.396113   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 01:30:48.396190   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:50.508717   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:50.509183   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:50.509183   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:53.355761   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:53.355844   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:53.361642   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:30:53.361706   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:30:53.361706   14284 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 01:30:54.715626   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0308 01:30:54.715626   14284 machine.go:97] duration metric: took 46.1157699s to provisionDockerMachine
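	The generated unit is staged as docker.service.new and only swapped into place when it differs from what is already installed. A minimal sketch of that idempotent swap, matching the SSH command above (note that diff also fails when the old unit does not exist yet, which is why this fresh install takes the replace branch and systemd reports the newly created symlink):
	
	    # Sketch only: install the staged unit only if it changed, then reload and restart.
	    UNIT=/lib/systemd/system/docker.service
	    if ! sudo diff -u "$UNIT" "$UNIT.new"; then
	        sudo mv "$UNIT.new" "$UNIT"
	        sudo systemctl -f daemon-reload
	        sudo systemctl -f enable docker
	        sudo systemctl -f restart docker
	    fi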
	I0308 01:30:54.715626   14284 client.go:171] duration metric: took 2m1.8590205s to LocalClient.Create
	I0308 01:30:54.715626   14284 start.go:167] duration metric: took 2m1.8590205s to libmachine.API.Create "calico-503300"
	I0308 01:30:54.715626   14284 start.go:293] postStartSetup for "calico-503300" (driver="hyperv")
	I0308 01:30:54.715626   14284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 01:30:54.733992   14284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 01:30:54.733992   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:30:56.941284   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:30:56.941284   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:56.941372   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:30:59.483619   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:30:59.483619   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:30:59.484141   14284 sshutil.go:53] new ssh client: &{IP:172.20.55.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\id_rsa Username:docker}
	I0308 01:30:59.592982   14284 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8589447s)
	I0308 01:30:59.605308   14284 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 01:30:59.613403   14284 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 01:30:59.613403   14284 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 01:30:59.613920   14284 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 01:30:59.615125   14284 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 01:30:59.628577   14284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 01:30:59.648436   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 01:30:59.698552   14284 start.go:296] duration metric: took 4.9828794s for postStartSetup
	I0308 01:30:59.702505   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:31:01.968424   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:01.968424   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:01.968710   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:04.692971   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:31:04.702098   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:04.702231   14284 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\config.json ...
	I0308 01:31:04.705014   14284 start.go:128] duration metric: took 2m11.8549515s to createHost
	I0308 01:31:04.705539   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:31:06.897073   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:06.897073   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:06.897221   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:09.409715   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:31:09.420735   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:09.429113   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:31:09.430817   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:31:09.430870   14284 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 01:31:09.569210   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709861469.582091327
	
	I0308 01:31:09.569210   14284 fix.go:216] guest clock: 1709861469.582091327
	I0308 01:31:09.569210   14284 fix.go:229] Guest: 2024-03-08 01:31:09.582091327 +0000 UTC Remote: 2024-03-08 01:31:04.7050146 +0000 UTC m=+419.527802701 (delta=4.877076727s)
	I0308 01:31:09.569793   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:31:11.885181   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:11.885181   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:11.895195   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:14.542297   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:31:14.543574   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:14.551967   14284 main.go:141] libmachine: Using SSH client type: native
	I0308 01:31:14.552643   14284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.55.16 22 <nil> <nil>}
	I0308 01:31:14.552911   14284 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709861469
	I0308 01:31:14.717004   14284 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 01:31:09 UTC 2024
	
	I0308 01:31:14.717084   14284 fix.go:236] clock set: Fri Mar  8 01:31:09 UTC 2024
	 (err=<nil>)
	I0308 01:31:14.717084   14284 start.go:83] releasing machines lock for "calico-503300", held for 2m21.867637s
	I0308 01:31:14.717397   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:31:14.717447    3724 start.go:364] duration metric: took 5m27.8923768s to acquireMachinesLock for "pause-549000"
	I0308 01:31:14.717896    3724 start.go:96] Skipping create...Using existing machine configuration
	I0308 01:31:14.717975    3724 fix.go:54] fixHost starting: 
	I0308 01:31:14.718911    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:17.107281    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:17.107454    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:17.107584    3724 fix.go:112] recreateIfNeeded on pause-549000: state=Running err=<nil>
	W0308 01:31:17.107646    3724 fix.go:138] unexpected machine state, will restart: <nil>
	I0308 01:31:17.110751    3724 out.go:177] * Updating the running hyperv "pause-549000" VM ...
	I0308 01:31:17.076934   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:17.076934   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:17.077006   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:19.935728   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:31:19.935728   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:19.943135   14284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 01:31:19.943135   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:31:19.968466   14284 ssh_runner.go:195] Run: cat /version.json
	I0308 01:31:19.968466   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:31:17.114486    3724 machine.go:94] provisionDockerMachine start ...
	I0308 01:31:17.114486    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:19.434022    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:19.435629    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:19.435629    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:22.659895   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:22.659964   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:22.659964   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:22.689600   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:22.689600   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:22.690126   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:22.589211    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:31:22.589211    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:22.596795    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:31:22.597929    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:31:22.597984    3724 main.go:141] libmachine: About to run SSH command:
	hostname
	I0308 01:31:22.766761    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-549000
	
	I0308 01:31:22.766761    3724 buildroot.go:166] provisioning hostname "pause-549000"
	I0308 01:31:22.766761    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:25.456219    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:25.459073    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:25.459130    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:25.875843   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:31:25.875843   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:25.876147   14284 sshutil.go:53] new ssh client: &{IP:172.20.55.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\id_rsa Username:docker}
	I0308 01:31:25.961562   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:31:25.961562   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:25.962322   14284 sshutil.go:53] new ssh client: &{IP:172.20.55.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\id_rsa Username:docker}
	I0308 01:31:26.054987   14284 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (6.1116836s)
	I0308 01:31:26.077460   14284 ssh_runner.go:235] Completed: cat /version.json: (6.1088515s)
	I0308 01:31:26.092259   14284 ssh_runner.go:195] Run: systemctl --version
	I0308 01:31:26.114776   14284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0308 01:31:26.124699   14284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 01:31:26.138800   14284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 01:31:26.173083   14284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0308 01:31:26.173174   14284 start.go:494] detecting cgroup driver to use...
	I0308 01:31:26.173452   14284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:31:26.222376   14284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 01:31:26.264899   14284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 01:31:26.290520   14284 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 01:31:26.306609   14284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 01:31:26.339950   14284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:31:26.374752   14284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 01:31:26.414542   14284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:31:26.455420   14284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 01:31:26.488088   14284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 01:31:26.527257   14284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 01:31:26.568510   14284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 01:31:26.606138   14284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:31:26.815489   14284 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0308 01:31:26.856958   14284 start.go:494] detecting cgroup driver to use...
	I0308 01:31:26.873986   14284 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 01:31:26.927028   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:31:26.969428   14284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 01:31:27.026562   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:31:27.064799   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 01:31:27.103913   14284 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0308 01:31:27.167082   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 01:31:27.199112   14284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:31:27.254951   14284 ssh_runner.go:195] Run: which cri-dockerd
	I0308 01:31:27.285154   14284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 01:31:27.313229   14284 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 01:31:27.376877   14284 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 01:31:27.617827   14284 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 01:31:27.878036   14284 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 01:31:27.878410   14284 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0308 01:31:27.928833   14284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:31:28.120434   14284 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 01:31:29.739394   14284 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6189458s)
	I0308 01:31:29.754659   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 01:31:29.795554   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:31:29.830347   14284 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 01:31:30.093648   14284 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 01:31:30.352914   14284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:31:30.593532   14284 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 01:31:30.657165   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:31:30.706398   14284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:31:30.902004   14284 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 01:31:31.029866   14284 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 01:31:31.047041   14284 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 01:31:31.060335   14284 start.go:562] Will wait 60s for crictl version
	I0308 01:31:31.074458   14284 ssh_runner.go:195] Run: which crictl
	I0308 01:31:31.093408   14284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 01:31:31.180724   14284 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 01:31:31.195379   14284 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:31:31.245102   14284 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
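	The commands between 01:31:26 and 01:31:31 switch the guest's container runtime to Docker with cri-dockerd: containerd and crio are stopped if active, crictl is pointed at the cri-dockerd socket, and the docker and cri-docker units are unmasked, enabled and restarted before crictl and docker report their versions. Collapsed into one sequence (a sketch assembled from those log lines, not minikube's source):
	
	    # Sketch only: make Docker + cri-dockerd the active runtime on the guest.
	    sudo systemctl stop -f containerd 2>/dev/null || true
	    sudo systemctl stop -f crio 2>/dev/null || true
	    printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml
	    sudo systemctl unmask docker.service && sudo systemctl enable docker.socket
	    sudo systemctl unmask cri-docker.socket && sudo systemctl enable cri-docker.socket
	    sudo systemctl daemon-reload
	    sudo systemctl restart docker cri-docker.socket cri-docker.service
	    sudo /usr/bin/crictl version        # sanity check: runtime should be docker 24.0.7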
	I0308 01:31:28.357796    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:31:28.357796    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:28.373315    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:31:28.374079    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:31:28.374079    3724 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-549000 && echo "pause-549000" | sudo tee /etc/hostname
	I0308 01:31:28.523105    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-549000
	
	I0308 01:31:28.523207    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:30.802240    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:30.816134    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:30.816134    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:31.308239   14284 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 01:31:31.308357   14284 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 01:31:31.317134   14284 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 01:31:31.317134   14284 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 01:31:31.317134   14284 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 01:31:31.317134   14284 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 01:31:31.321255   14284 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 01:31:31.321255   14284 ip.go:210] interface addr: 172.20.48.1/20
	I0308 01:31:31.337592   14284 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 01:31:31.342880   14284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 01:31:31.377221   14284 kubeadm.go:877] updating cluster {Name:calico-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.2
8.4 ClusterName:calico-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:172.20.55.16 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 01:31:31.377631   14284 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 01:31:31.391773   14284 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:31:31.418648   14284 docker.go:685] Got preloaded images: 
	I0308 01:31:31.418793   14284 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0308 01:31:31.431755   14284 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0308 01:31:31.469906   14284 ssh_runner.go:195] Run: which lz4
	I0308 01:31:31.491973   14284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0308 01:31:31.498683   14284 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0308 01:31:31.498925   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0308 01:31:34.435195   14284 docker.go:649] duration metric: took 2.9563584s to copy over tarball
	I0308 01:31:34.453969   14284 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0308 01:31:34.397259    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:31:34.397439    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:34.405826    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:31:34.406977    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:31:34.407087    3724 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-549000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-549000/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-549000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0308 01:31:34.567755    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 01:31:34.567755    3724 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0308 01:31:34.568296    3724 buildroot.go:174] setting up certificates
	I0308 01:31:34.568296    3724 provision.go:84] configureAuth start
	I0308 01:31:34.568371    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:37.048506    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:37.048700    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:37.049329    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:40.037206    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:31:40.037206    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:40.039622    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:43.168632   14284 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.7142164s)
	I0308 01:31:43.168710   14284 ssh_runner.go:146] rm: /preloaded.tar.lz4
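	Because the guest has no /preloaded.tar.lz4 (the stat probe above exits with status 1), the ~423 MB preload tarball is copied over from the host cache and unpacked straight into /var, then removed. The extract-and-clean-up pair, as a sketch of what the log runs over SSH:
	
	    # Sketch only: seed the Docker image store from the preloaded image tarball.
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4   # cleanup; the log's rm step (exact flags assumed)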
	I0308 01:31:43.243663   14284 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0308 01:31:43.263037   14284 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0308 01:31:43.320475   14284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:31:43.543692   14284 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 01:31:42.347717    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:42.347717    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:42.347717    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:45.137447    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:31:45.139775    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:45.139775    3724 provision.go:143] copyHostCerts
	I0308 01:31:45.140154    3724 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0308 01:31:45.140232    3724 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0308 01:31:45.140719    3724 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0308 01:31:45.141982    3724 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0308 01:31:45.142071    3724 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0308 01:31:45.142474    3724 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0308 01:31:45.143757    3724 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0308 01:31:45.143757    3724 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0308 01:31:45.144288    3724 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0308 01:31:45.145683    3724 provision.go:117] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-549000 san=[127.0.0.1 172.20.54.215 localhost minikube pause-549000]
	I0308 01:31:45.715213    3724 provision.go:177] copyRemoteCerts
	I0308 01:31:45.731253    3724 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0308 01:31:45.731253    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:47.286254   14284 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.7425274s)
	I0308 01:31:47.303988   14284 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:31:47.362952   14284 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0308 01:31:47.362952   14284 cache_images.go:84] Images are preloaded, skipping loading
	I0308 01:31:47.363082   14284 kubeadm.go:928] updating node { 172.20.55.16 8443 v1.28.4 docker true true} ...
	I0308 01:31:47.363461   14284 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-503300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.55.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:calico-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0308 01:31:47.375488   14284 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0308 01:31:47.437948   14284 cni.go:84] Creating CNI manager for "calico"
	I0308 01:31:47.437948   14284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 01:31:47.437948   14284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.55.16 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-503300 NodeName:calico-503300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.55.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.55.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 01:31:47.437948   14284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.55.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "calico-503300"
	  kubeletExtraArgs:
	    node-ip: 172.20.55.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.55.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 01:31:47.456562   14284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 01:31:47.484967   14284 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 01:31:47.498196   14284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 01:31:47.519219   14284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0308 01:31:47.558050   14284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 01:31:47.594526   14284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0308 01:31:47.652652   14284 ssh_runner.go:195] Run: grep 172.20.55.16	control-plane.minikube.internal$ /etc/hosts
	I0308 01:31:47.662592   14284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.55.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0308 01:31:47.704294   14284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:31:47.953483   14284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:31:47.980742   14284 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300 for IP: 172.20.55.16
	I0308 01:31:47.980742   14284 certs.go:194] generating shared ca certs ...
	I0308 01:31:47.980742   14284 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:31:47.986471   14284 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 01:31:47.986937   14284 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 01:31:47.987125   14284 certs.go:256] generating profile certs ...
	I0308 01:31:47.987882   14284 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\client.key
	I0308 01:31:47.988043   14284 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\client.crt with IP's: []
	I0308 01:31:48.166879   14284 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\client.crt ...
	I0308 01:31:48.166879   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\client.crt: {Name:mkef29162d9ddc9479d5d9954eda9121f483432f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:31:48.168435   14284 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\client.key ...
	I0308 01:31:48.168435   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\client.key: {Name:mk770cd20d27827299fed4fccedf13ab7bf665de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:31:48.169736   14284 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.key.8d89abe0
	I0308 01:31:48.169922   14284 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.crt.8d89abe0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.55.16]
	I0308 01:31:48.531923   14284 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.crt.8d89abe0 ...
	I0308 01:31:48.531923   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.crt.8d89abe0: {Name:mke0e27e89fe08de672060d263a29c2ccc905996 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:31:48.536652   14284 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.key.8d89abe0 ...
	I0308 01:31:48.536652   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.key.8d89abe0: {Name:mk7f94d05b162d58e31a7c06c316d6bf3f534512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:31:48.538032   14284 certs.go:381] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.crt.8d89abe0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.crt
	I0308 01:31:48.552115   14284 certs.go:385] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.key.8d89abe0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.key
	I0308 01:31:48.552471   14284 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.key
	I0308 01:31:48.553936   14284 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.crt with IP's: []
	I0308 01:31:48.666580   14284 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.crt ...
	I0308 01:31:48.666580   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.crt: {Name:mkac2d2459dde68e22d0324f5fae615dcb1db770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:31:48.671811   14284 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.key ...
	I0308 01:31:48.671811   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.key: {Name:mkfe130b3c366a72d0ebcc741131ab1500ca22b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:31:48.687532   14284 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 01:31:48.688241   14284 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 01:31:48.688241   14284 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 01:31:48.688778   14284 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 01:31:48.689397   14284 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 01:31:48.689397   14284 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 01:31:48.690391   14284 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 01:31:48.693179   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 01:31:48.750483   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 01:31:48.790561   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 01:31:48.843293   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 01:31:48.896108   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0308 01:31:48.942706   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0308 01:31:49.005168   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 01:31:49.056308   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-503300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0308 01:31:49.110821   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 01:31:49.160925   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 01:31:49.209856   14284 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 01:31:49.257672   14284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 01:31:49.318563   14284 ssh_runner.go:195] Run: openssl version
	I0308 01:31:49.348284   14284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 01:31:49.388998   14284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:31:49.396986   14284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:31:49.412136   14284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:31:49.443862   14284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 01:31:49.492566   14284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 01:31:49.534797   14284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 01:31:49.542870   14284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 01:31:49.561032   14284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 01:31:49.590046   14284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0308 01:31:49.633579   14284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 01:31:49.672596   14284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 01:31:49.683565   14284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 01:31:49.698835   14284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 01:31:49.720547   14284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
	I0308 01:31:49.757568   14284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 01:31:49.764034   14284 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0308 01:31:49.764478   14284 kubeadm.go:391] StartCluster: {Name:calico-503300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-503300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:172.20.55.16 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 01:31:49.778030   14284 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 01:31:49.823931   14284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0308 01:31:49.858331   14284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 01:31:49.901131   14284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 01:31:49.919504   14284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0308 01:31:49.919559   14284 kubeadm.go:156] found existing configuration files:
	
	I0308 01:31:49.939554   14284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 01:31:49.958419   14284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0308 01:31:49.972407   14284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0308 01:31:50.003237   14284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 01:31:50.021664   14284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0308 01:31:50.037994   14284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0308 01:31:50.079191   14284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 01:31:50.097270   14284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0308 01:31:50.111198   14284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 01:31:50.149614   14284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 01:31:50.169065   14284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0308 01:31:50.183407   14284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 01:31:50.201570   14284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0308 01:31:48.153886    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:48.154176    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:48.154236    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:51.015958    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:31:51.020136    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:51.020315    3724 sshutil.go:53] new ssh client: &{IP:172.20.54.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\pause-549000\id_rsa Username:docker}
	I0308 01:31:51.142017    3724 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.4106182s)
	I0308 01:31:51.142612    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0308 01:31:51.194966    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0308 01:31:51.244338    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0308 01:31:51.290128    3724 provision.go:87] duration metric: took 16.7216768s to configureAuth
	I0308 01:31:51.290128    3724 buildroot.go:189] setting minikube options for container-runtime
	I0308 01:31:51.290976    3724 config.go:182] Loaded profile config "pause-549000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:31:51.291120    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:50.515990   14284 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0308 01:31:53.580699    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:53.580781    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:53.580781    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:31:56.407283    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:31:56.407283    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:56.416878    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:31:56.417566    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:31:56.417566    3724 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0308 01:31:56.566395    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0308 01:31:56.566395    3724 buildroot.go:70] root file system type: tmpfs
	I0308 01:31:56.567175    3724 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0308 01:31:56.567333    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:31:58.948936    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:31:58.949050    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:31:58.949050    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:06.550314   14284 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0308 01:32:06.553171   14284 kubeadm.go:309] [preflight] Running pre-flight checks
	I0308 01:32:06.553245   14284 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0308 01:32:06.553793   14284 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0308 01:32:06.553986   14284 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0308 01:32:06.554125   14284 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0308 01:32:06.556868   14284 out.go:204]   - Generating certificates and keys ...
	I0308 01:32:06.557518   14284 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0308 01:32:06.557733   14284 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0308 01:32:06.557964   14284 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0308 01:32:06.558298   14284 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0308 01:32:06.558533   14284 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0308 01:32:06.558776   14284 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0308 01:32:06.559010   14284 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0308 01:32:06.559121   14284 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [calico-503300 localhost] and IPs [172.20.55.16 127.0.0.1 ::1]
	I0308 01:32:06.559440   14284 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0308 01:32:06.559809   14284 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [calico-503300 localhost] and IPs [172.20.55.16 127.0.0.1 ::1]
	I0308 01:32:06.560035   14284 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0308 01:32:06.560216   14284 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0308 01:32:06.560216   14284 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0308 01:32:06.560386   14284 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0308 01:32:06.560534   14284 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0308 01:32:06.560534   14284 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0308 01:32:06.560534   14284 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0308 01:32:06.561047   14284 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0308 01:32:06.561411   14284 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0308 01:32:06.561607   14284 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0308 01:32:06.565270   14284 out.go:204]   - Booting up control plane ...
	I0308 01:32:06.565328   14284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0308 01:32:06.565920   14284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0308 01:32:06.566034   14284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0308 01:32:06.566034   14284 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0308 01:32:06.566034   14284 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0308 01:32:06.567757   14284 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0308 01:32:06.567911   14284 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0308 01:32:06.567911   14284 kubeadm.go:309] [apiclient] All control plane components are healthy after 10.005664 seconds
	I0308 01:32:06.567911   14284 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0308 01:32:06.569028   14284 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0308 01:32:06.569028   14284 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0308 01:32:06.569028   14284 kubeadm.go:309] [mark-control-plane] Marking the node calico-503300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0308 01:32:06.569028   14284 kubeadm.go:309] [bootstrap-token] Using token: ld1yy6.lquh2o9913bssi2z
	I0308 01:32:06.572237   14284 out.go:204]   - Configuring RBAC rules ...
	I0308 01:32:06.572784   14284 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0308 01:32:06.573228   14284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0308 01:32:06.573688   14284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0308 01:32:06.573947   14284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0308 01:32:06.574347   14284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0308 01:32:06.574664   14284 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0308 01:32:06.575090   14284 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0308 01:32:06.575316   14284 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0308 01:32:06.575359   14284 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0308 01:32:06.575359   14284 kubeadm.go:309] 
	I0308 01:32:06.575359   14284 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0308 01:32:06.575359   14284 kubeadm.go:309] 
	I0308 01:32:06.576006   14284 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0308 01:32:06.576099   14284 kubeadm.go:309] 
	I0308 01:32:06.576170   14284 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0308 01:32:06.576170   14284 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0308 01:32:06.576569   14284 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0308 01:32:06.576569   14284 kubeadm.go:309] 
	I0308 01:32:06.576569   14284 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0308 01:32:06.576569   14284 kubeadm.go:309] 
	I0308 01:32:06.576569   14284 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0308 01:32:06.576569   14284 kubeadm.go:309] 
	I0308 01:32:06.576569   14284 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0308 01:32:06.577732   14284 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0308 01:32:06.577882   14284 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0308 01:32:06.577882   14284 kubeadm.go:309] 
	I0308 01:32:06.578463   14284 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0308 01:32:06.578656   14284 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0308 01:32:06.578656   14284 kubeadm.go:309] 
	I0308 01:32:06.578809   14284 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ld1yy6.lquh2o9913bssi2z \
	I0308 01:32:06.580167   14284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 \
	I0308 01:32:06.580167   14284 kubeadm.go:309] 	--control-plane 
	I0308 01:32:06.580418   14284 kubeadm.go:309] 
	I0308 01:32:06.580770   14284 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0308 01:32:06.580770   14284 kubeadm.go:309] 
	I0308 01:32:06.581011   14284 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ld1yy6.lquh2o9913bssi2z \
	I0308 01:32:06.581320   14284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:5c928881ad3811f2986b9dca51dc5dbbe05e146204d738ffd17867bddd068a42 
	I0308 01:32:06.581320   14284 cni.go:84] Creating CNI manager for "calico"
	I0308 01:32:06.583717   14284 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0308 01:32:01.786492    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:01.791112    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:01.798191    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:32:01.799313    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:32:01.799491    3724 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0308 01:32:01.982658    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0308 01:32:01.982831    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:32:04.455244    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:04.456660    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:04.456660    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:06.587675   14284 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0308 01:32:06.588248   14284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (252439 bytes)
	I0308 01:32:06.731728   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0308 01:32:10.301441   14284 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (3.5696795s)
	I0308 01:32:10.301441   14284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 01:32:10.324910   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:10.326451   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-503300 minikube.k8s.io/updated_at=2024_03_08T01_32_10_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9c2ced1cce693d4d04abc192b43cb5294694bbd minikube.k8s.io/name=calico-503300 minikube.k8s.io/primary=true
	I0308 01:32:10.345197   14284 ops.go:34] apiserver oom_adj: -16
	I0308 01:32:07.354237    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:07.354237    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:07.361709    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:32:07.361709    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:32:07.362435    3724 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0308 01:32:07.508362    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0308 01:32:07.508362    3724 machine.go:97] duration metric: took 50.3934071s to provisionDockerMachine
	I0308 01:32:07.508500    3724 start.go:293] postStartSetup for "pause-549000" (driver="hyperv")
	I0308 01:32:07.508500    3724 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0308 01:32:07.525595    3724 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0308 01:32:07.525595    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:32:10.033934    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:10.034985    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:10.035202    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:10.594501   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:11.092068   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:11.603362   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:12.102353   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:12.604425   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:13.106122   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:13.613284   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:14.092885   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:14.597242   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:15.109011   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:12.886440    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:12.891049    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:12.891433    3724 sshutil.go:53] new ssh client: &{IP:172.20.54.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\pause-549000\id_rsa Username:docker}
	I0308 01:32:13.003262    3724 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.4775264s)
	I0308 01:32:13.025232    3724 ssh_runner.go:195] Run: cat /etc/os-release
	I0308 01:32:13.038451    3724 info.go:137] Remote host: Buildroot 2023.02.9
	I0308 01:32:13.038583    3724 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0308 01:32:13.038583    3724 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0308 01:32:13.040286    3724 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem -> 83242.pem in /etc/ssl/certs
	I0308 01:32:13.055222    3724 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0308 01:32:13.075712    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /etc/ssl/certs/83242.pem (1708 bytes)
	I0308 01:32:13.160118    3724 start.go:296] duration metric: took 5.6514936s for postStartSetup
	I0308 01:32:13.160264    3724 fix.go:56] duration metric: took 58.4417447s for fixHost
	I0308 01:32:13.160377    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:32:15.536028    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:15.547316    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:15.547316    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:15.599757   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:16.113880   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:16.612862   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:17.113933   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:17.603732   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:18.105417   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:18.598224   14284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0308 01:32:18.820443   14284 kubeadm.go:1106] duration metric: took 8.5189231s to wait for elevateKubeSystemPrivileges
	W0308 01:32:18.820626   14284 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0308 01:32:18.820626   14284 kubeadm.go:393] duration metric: took 29.0558776s to StartCluster
	I0308 01:32:18.820883   14284 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:32:18.821090   14284 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 01:32:18.824700   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:32:18.827000   14284 start.go:234] Will wait 15m0s for node &{Name: IP:172.20.55.16 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 01:32:18.830010   14284 out.go:177] * Verifying Kubernetes components...
	I0308 01:32:18.827212   14284 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 01:32:18.827795   14284 config.go:182] Loaded profile config "calico-503300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:32:18.827891   14284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0308 01:32:18.830010   14284 addons.go:69] Setting storage-provisioner=true in profile "calico-503300"
	I0308 01:32:18.830010   14284 addons.go:69] Setting default-storageclass=true in profile "calico-503300"
	I0308 01:32:18.833493   14284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-503300"
	I0308 01:32:18.833493   14284 addons.go:234] Setting addon storage-provisioner=true in "calico-503300"
	I0308 01:32:18.833598   14284 host.go:66] Checking if "calico-503300" exists ...
	I0308 01:32:18.834480   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:32:18.834908   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:32:18.858668   14284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:32:19.462671   14284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0308 01:32:19.563352   14284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:32:18.417725    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:18.417725    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:18.428830    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:32:18.429778    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:32:18.429814    3724 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0308 01:32:18.555707    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709861538.565947171
	
	I0308 01:32:18.555707    3724 fix.go:216] guest clock: 1709861538.565947171
	I0308 01:32:18.555707    3724 fix.go:229] Guest: 2024-03-08 01:32:18.565947171 +0000 UTC Remote: 2024-03-08 01:32:13.1603011 +0000 UTC m=+391.671528101 (delta=5.405646071s)
	I0308 01:32:18.555707    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:32:21.511772    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:21.528157    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:21.528157    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:21.861513   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:21.861513   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:21.865925   14284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0308 01:32:21.869793   14284 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 01:32:21.869926   14284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0308 01:32:21.870016   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:32:21.929426   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:21.929496   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:21.936187   14284 addons.go:234] Setting addon default-storageclass=true in "calico-503300"
	I0308 01:32:21.936462   14284 host.go:66] Checking if "calico-503300" exists ...
	I0308 01:32:21.938134   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:32:22.408833   14284 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.9461352s)
	I0308 01:32:22.408833   14284 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.845455s)
	I0308 01:32:22.408833   14284 start.go:948] {"host.minikube.internal": 172.20.48.1} host record injected into CoreDNS's ConfigMap
	I0308 01:32:22.413738   14284 node_ready.go:35] waiting up to 15m0s for node "calico-503300" to be "Ready" ...
	I0308 01:32:22.968752   14284 kapi.go:248] "coredns" deployment in "kube-system" namespace and "calico-503300" context rescaled to 1 replicas
	I0308 01:32:24.423205   14284 node_ready.go:53] node "calico-503300" has status "Ready":"False"
	I0308 01:32:25.287277   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:25.287493   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:25.287586   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:25.587204    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:25.587304    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:25.596317    3724 main.go:141] libmachine: Using SSH client type: native
	I0308 01:32:25.597134    3724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x929d80] 0x92c960 <nil>  [] 0s} 172.20.54.215 22 <nil> <nil>}
	I0308 01:32:25.597134    3724 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709861538
	I0308 01:32:25.784230    3724 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Mar  8 01:32:18 UTC 2024
	
	I0308 01:32:25.784230    3724 fix.go:236] clock set: Fri Mar  8 01:32:18 UTC 2024
	 (err=<nil>)
	I0308 01:32:25.784230    3724 start.go:83] releasing machines lock for "pause-549000", held for 1m11.0658999s
	I0308 01:32:25.784230    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:32:25.506584   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:25.510139   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:25.510420   14284 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0308 01:32:25.510420   14284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0308 01:32:25.510564   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-503300 ).state
	I0308 01:32:26.431837   14284 node_ready.go:53] node "calico-503300" has status "Ready":"False"
	I0308 01:32:28.471717   14284 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:28.473907   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:28.474156   14284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-503300 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:28.807817   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:32:28.807817   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:28.813008   14284 sshutil.go:53] new ssh client: &{IP:172.20.55.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\id_rsa Username:docker}
	I0308 01:32:28.938613   14284 node_ready.go:53] node "calico-503300" has status "Ready":"False"
	I0308 01:32:28.979472   14284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0308 01:32:30.145541   14284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1657935s)
	I0308 01:32:28.779718    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:28.788428    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:28.788428    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:31.431609   14284 node_ready.go:53] node "calico-503300" has status "Ready":"False"
	I0308 01:32:31.573513   14284 main.go:141] libmachine: [stdout =====>] : 172.20.55.16
	
	I0308 01:32:31.573513   14284 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:31.573902   14284 sshutil.go:53] new ssh client: &{IP:172.20.55.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-503300\id_rsa Username:docker}
	I0308 01:32:31.815458   14284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0308 01:32:32.315461   14284 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0308 01:32:32.319250   14284 addons.go:505] duration metric: took 13.491967s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0308 01:32:34.383531   14284 node_ready.go:53] node "calico-503300" has status "Ready":"False"
	I0308 01:32:31.954483    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:31.958462    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:31.962873    3724 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0308 01:32:31.963031    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:32:31.983205    3724 ssh_runner.go:195] Run: cat /version.json
	I0308 01:32:31.983397    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-549000 ).state
	I0308 01:32:34.560082    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:34.560164    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:34.560267    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:34.655083    3724 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 01:32:34.655083    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:34.655202    3724 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-549000 ).networkadapters[0]).ipaddresses[0]
	I0308 01:32:36.686467   14284 node_ready.go:53] node "calico-503300" has status "Ready":"False"
	I0308 01:32:38.934637   14284 node_ready.go:53] node "calico-503300" has status "Ready":"False"
	I0308 01:32:37.763987    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:37.763987    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:37.771940    3724 sshutil.go:53] new ssh client: &{IP:172.20.54.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\pause-549000\id_rsa Username:docker}
	I0308 01:32:37.849496    3724 main.go:141] libmachine: [stdout =====>] : 172.20.54.215
	
	I0308 01:32:37.849496    3724 main.go:141] libmachine: [stderr =====>] : 
	I0308 01:32:37.849496    3724 sshutil.go:53] new ssh client: &{IP:172.20.54.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\pause-549000\id_rsa Username:docker}
	I0308 01:32:37.868364    3724 ssh_runner.go:235] Completed: cat /version.json: (5.8850483s)
	I0308 01:32:37.882222    3724 ssh_runner.go:195] Run: systemctl --version
	I0308 01:32:37.917749    3724 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0308 01:32:39.940789    3724 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (2.0230213s)
	W0308 01:32:39.940789    3724 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0308 01:32:39.940789    3724 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.9777954s)
	W0308 01:32:39.941391    3724 start.go:862] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	W0308 01:32:39.941544    3724 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0308 01:32:39.941630    3724 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0308 01:32:39.954038    3724 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0308 01:32:39.969958    3724 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0308 01:32:39.969958    3724 start.go:494] detecting cgroup driver to use...
	I0308 01:32:39.972112    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:32:40.039870    3724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0308 01:32:40.082416    3724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0308 01:32:40.112111    3724 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0308 01:32:40.132293    3724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0308 01:32:40.169725    3724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:32:40.209615    3724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0308 01:32:40.243573    3724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0308 01:32:40.284812    3724 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0308 01:32:40.323147    3724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0308 01:32:40.370756    3724 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0308 01:32:40.407008    3724 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0308 01:32:40.454931    3724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:32:40.747187    3724 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0308 01:32:40.786235    3724 start.go:494] detecting cgroup driver to use...
	I0308 01:32:40.801148    3724 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0308 01:32:40.847229    3724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:32:40.888865    3724 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0308 01:32:40.956169    3724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0308 01:32:41.012361    3724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0308 01:32:41.055468    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0308 01:32:41.113185    3724 ssh_runner.go:195] Run: which cri-dockerd
	I0308 01:32:41.142380    3724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0308 01:32:41.163862    3724 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0308 01:32:41.216077    3724 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0308 01:32:41.448380   14284 node_ready.go:49] node "calico-503300" has status "Ready":"True"
	I0308 01:32:41.448514   14284 node_ready.go:38] duration metric: took 19.034386s for node "calico-503300" to be "Ready" ...
	I0308 01:32:41.448568   14284 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:32:41.473951   14284 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace to be "Ready" ...
	I0308 01:32:43.697040   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:41.705397    3724 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0308 01:32:42.245684    3724 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0308 01:32:42.245997    3724 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0308 01:32:42.346853    3724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:32:42.863726    3724 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0308 01:32:45.997524   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:48.495020   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:50.497376   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:52.546184   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:54.999141   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:55.057577    3724 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.1936506s)
	I0308 01:32:55.070127    3724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0308 01:32:55.124365    3724 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0308 01:32:55.176152    3724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:32:55.224044    3724 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0308 01:32:55.535376    3724 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0308 01:32:55.836340    3724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:32:56.086399    3724 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0308 01:32:56.134533    3724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0308 01:32:56.183170    3724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:32:56.492756    3724 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0308 01:32:56.644796    3724 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0308 01:32:56.662378    3724 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0308 01:32:56.795379    3724 start.go:562] Will wait 60s for crictl version
	I0308 01:32:56.813454    3724 ssh_runner.go:195] Run: which crictl
	I0308 01:32:56.838565    3724 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0308 01:32:57.014099    3724 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0308 01:32:57.029571    3724 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:32:57.089370    3724 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0308 01:32:57.496362   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:59.554087   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:32:57.136917    3724 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0308 01:32:57.137164    3724 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0308 01:32:57.144507    3724 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0308 01:32:57.144507    3724 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0308 01:32:57.144507    3724 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0308 01:32:57.144507    3724 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:6b:b0:49 Flags:up|broadcast|multicast|running}
	I0308 01:32:57.149765    3724 ip.go:210] interface addr: fe80::bb1a:f5e3:b4d7:df3b/64
	I0308 01:32:57.149765    3724 ip.go:210] interface addr: 172.20.48.1/20
	I0308 01:32:57.170678    3724 ssh_runner.go:195] Run: grep 172.20.48.1	host.minikube.internal$ /etc/hosts
	I0308 01:32:57.177951    3724 kubeadm.go:877] updating cluster {Name:pause-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:pause-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.54.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin
:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0308 01:32:57.178389    3724 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0308 01:32:57.191837    3724 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:32:57.236219    3724 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0308 01:32:57.236219    3724 docker.go:615] Images already preloaded, skipping extraction
	I0308 01:32:57.250188    3724 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0308 01:32:57.331998    3724 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0308 01:32:57.332112    3724 cache_images.go:84] Images are preloaded, skipping loading
	I0308 01:32:57.332167    3724 kubeadm.go:928] updating node { 172.20.54.215 8443 v1.28.4 docker true true} ...
	I0308 01:32:57.332262    3724 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-549000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.54.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0308 01:32:57.347524    3724 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0308 01:32:57.431910    3724 cni.go:84] Creating CNI manager for ""
	I0308 01:32:57.431910    3724 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0308 01:32:57.431910    3724 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0308 01:32:57.431910    3724 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.54.215 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-549000 NodeName:pause-549000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.54.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.54.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0308 01:32:57.432450    3724 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.54.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-549000"
	  kubeletExtraArgs:
	    node-ip: 172.20.54.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.54.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0308 01:32:57.456499    3724 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0308 01:32:57.482587    3724 binaries.go:44] Found k8s binaries, skipping transfer
	I0308 01:32:57.504743    3724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0308 01:32:57.524499    3724 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0308 01:32:57.570114    3724 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0308 01:32:57.671074    3724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0308 01:32:57.759484    3724 ssh_runner.go:195] Run: grep 172.20.54.215	control-plane.minikube.internal$ /etc/hosts
	I0308 01:32:57.787378    3724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:32:58.256950    3724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:32:58.311600    3724 certs.go:68] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000 for IP: 172.20.54.215
	I0308 01:32:58.311600    3724 certs.go:194] generating shared ca certs ...
	I0308 01:32:58.311600    3724 certs.go:226] acquiring lock for ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:32:58.312870    3724 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0308 01:32:58.313416    3724 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0308 01:32:58.313624    3724 certs.go:256] generating profile certs ...
	I0308 01:32:58.314389    3724 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\client.key
	I0308 01:32:58.314732    3724 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\apiserver.key.61ed7ffd
	I0308 01:32:58.315195    3724 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\proxy-client.key
	I0308 01:32:58.317644    3724 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem (1338 bytes)
	W0308 01:32:58.318207    3724 certs.go:480] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324_empty.pem, impossibly tiny 0 bytes
	I0308 01:32:58.318361    3724 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0308 01:32:58.318843    3724 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0308 01:32:58.319240    3724 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0308 01:32:58.319545    3724 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0308 01:32:58.319545    3724 certs.go:484] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem (1708 bytes)
	I0308 01:32:58.321838    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0308 01:32:58.479372    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0308 01:32:58.621191    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0308 01:32:58.727121    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0308 01:32:58.930703    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0308 01:32:59.164913    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0308 01:32:59.296663    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0308 01:32:59.410620    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-549000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0308 01:32:59.526978    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0308 01:32:59.649578    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\8324.pem --> /usr/share/ca-certificates/8324.pem (1338 bytes)
	I0308 01:32:59.765629    3724 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\83242.pem --> /usr/share/ca-certificates/83242.pem (1708 bytes)
	I0308 01:32:59.879840    3724 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0308 01:32:59.968684    3724 ssh_runner.go:195] Run: openssl version
	I0308 01:33:00.022761    3724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0308 01:33:00.081169    3724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:33:00.088704    3724 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:33:00.114084    3724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0308 01:33:00.146863    3724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0308 01:33:00.198124    3724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8324.pem && ln -fs /usr/share/ca-certificates/8324.pem /etc/ssl/certs/8324.pem"
	I0308 01:33:00.251750    3724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8324.pem
	I0308 01:33:00.261748    3724 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 22:55 /usr/share/ca-certificates/8324.pem
	I0308 01:33:00.286137    3724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8324.pem
	I0308 01:33:00.322709    3724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8324.pem /etc/ssl/certs/51391683.0"
	I0308 01:33:00.374386    3724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83242.pem && ln -fs /usr/share/ca-certificates/83242.pem /etc/ssl/certs/83242.pem"
	I0308 01:33:00.418362    3724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83242.pem
	I0308 01:33:00.429677    3724 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 22:55 /usr/share/ca-certificates/83242.pem
	I0308 01:33:00.444242    3724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83242.pem
	I0308 01:33:00.473574    3724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83242.pem /etc/ssl/certs/3ec20f2e.0"
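The hash-and-symlink pairs above follow OpenSSL's CApath convention: at verification time OpenSSL looks a CA up by the file name <subject-hash>.0 under /etc/ssl/certs, so the hash printed by "openssl x509 -hash -noout" becomes the symlink name for each PEM. A small sketch of that idea, shelling out to openssl from Go; the paths are illustrative and this is not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into certsDir under "<subject-hash>.0",
// the name OpenSSL's CApath lookup expects for a trusted CA.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, mirroring "ln -fs".
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative path taken from the log; any PEM CA works.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}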
	I0308 01:33:00.527482    3724 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0308 01:33:00.557187    3724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0308 01:33:00.599946    3724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0308 01:33:00.644340    3724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0308 01:33:00.677432    3724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0308 01:33:00.714491    3724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0308 01:33:00.747220    3724 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
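Each "openssl x509 ... -checkend 86400" run above asks whether the certificate expires within the next 86400 seconds (24 hours); openssl exits non-zero if it does, which is presumably what gates reusing the existing control-plane certificates rather than regenerating them. A rough stdlib-only equivalent of that check, with an illustrative path and no claim to match minikube's internals:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// One of the certs checked in the log; the others follow the same pattern.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}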
	I0308 01:33:00.763151    3724 kubeadm.go:391] StartCluster: {Name:pause-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4
ClusterName:pause-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.54.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fa
lse olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0308 01:33:00.777563    3724 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 01:33:00.840918    3724 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0308 01:33:00.866834    3724 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0308 01:33:00.866834    3724 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0308 01:33:00.866834    3724 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0308 01:33:00.884019    3724 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0308 01:33:00.905768    3724 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0308 01:33:00.909110    3724 kubeconfig.go:125] found "pause-549000" server: "https://172.20.54.215:8443"
	I0308 01:33:00.913854    3724 kapi.go:59] client config for pause-549000: &rest.Config{Host:"https://172.20.54.215:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\pause-549000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\pause-549000\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d30520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0308 01:33:00.937550    3724 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0308 01:33:00.960943    3724 kubeadm.go:624] The running cluster does not require reconfiguration: 172.20.54.215
	I0308 01:33:00.961002    3724 kubeadm.go:1153] stopping kube-system containers ...
	I0308 01:33:00.972700    3724 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0308 01:33:01.030510    3724 docker.go:483] Stopping containers: [6cbd157ab876 1650ae73fce3 ca0870d599f1 8961256e70cb 0fe3021a276b 7e5dd9cf598f a4f413a3fab3 62c0412021bf 7a74af2b7663 96387479d692 c7cf0231ec49 0a1f04df7c18 7188ec9f8a67 b3d15e4a825c 5818f28c11b1 d8ce4d2e487d 27fc38536e0f cc0d865dfbff 8bf42ffc2d57 519087cb40bc 0d86f85b0efc 62063655f425 0431e581e1a9 96ac1ab8ac35]
	I0308 01:33:01.043286    3724 ssh_runner.go:195] Run: docker stop 6cbd157ab876 1650ae73fce3 ca0870d599f1 8961256e70cb 0fe3021a276b 7e5dd9cf598f a4f413a3fab3 62c0412021bf 7a74af2b7663 96387479d692 c7cf0231ec49 0a1f04df7c18 7188ec9f8a67 b3d15e4a825c 5818f28c11b1 d8ce4d2e487d 27fc38536e0f cc0d865dfbff 8bf42ffc2d57 519087cb40bc 0d86f85b0efc 62063655f425 0431e581e1a9 96ac1ab8ac35
	I0308 01:33:02.033488   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:33:04.483961   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:33:06.994377   14284 pod_ready.go:102] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"False"
	I0308 01:33:07.991022   14284 pod_ready.go:92] pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:07.991064   14284 pod_ready.go:81] duration metric: took 26.5168703s for pod "calico-kube-controllers-5fc7d6cf67-85tcb" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:07.991096   14284 pod_ready.go:78] waiting up to 15m0s for pod "calico-node-ft27j" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.011295   14284 pod_ready.go:92] pod "calico-node-ft27j" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:10.011405   14284 pod_ready.go:81] duration metric: took 2.0202914s for pod "calico-node-ft27j" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.011481   14284 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-bfdql" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.024490   14284 pod_ready.go:92] pod "coredns-5dd5756b68-bfdql" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:10.024490   14284 pod_ready.go:81] duration metric: took 13.0089ms for pod "coredns-5dd5756b68-bfdql" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.024490   14284 pod_ready.go:78] waiting up to 15m0s for pod "etcd-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.029662   14284 pod_ready.go:92] pod "etcd-calico-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:10.029662   14284 pod_ready.go:81] duration metric: took 5.172ms for pod "etcd-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.029662   14284 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.042284   14284 pod_ready.go:92] pod "kube-apiserver-calico-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:10.042338   14284 pod_ready.go:81] duration metric: took 12.6762ms for pod "kube-apiserver-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.042338   14284 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.050208   14284 pod_ready.go:92] pod "kube-controller-manager-calico-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:10.050208   14284 pod_ready.go:81] duration metric: took 7.8701ms for pod "kube-controller-manager-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.050208   14284 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-fplhq" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.560604    3724 ssh_runner.go:235] Completed: docker stop 6cbd157ab876 1650ae73fce3 ca0870d599f1 8961256e70cb 0fe3021a276b 7e5dd9cf598f a4f413a3fab3 62c0412021bf 7a74af2b7663 96387479d692 c7cf0231ec49 0a1f04df7c18 7188ec9f8a67 b3d15e4a825c 5818f28c11b1 d8ce4d2e487d 27fc38536e0f cc0d865dfbff 8bf42ffc2d57 519087cb40bc 0d86f85b0efc 62063655f425 0431e581e1a9 96ac1ab8ac35: (9.517125s)
	I0308 01:33:10.580045    3724 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0308 01:33:10.663503    3724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0308 01:33:10.693512    3724 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Mar  8 01:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Mar  8 01:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Mar  8 01:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Mar  8 01:25 /etc/kubernetes/scheduler.conf
	
	I0308 01:33:10.707636    3724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0308 01:33:10.745474    3724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0308 01:33:10.777558    3724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0308 01:33:10.799546    3724 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0308 01:33:10.816580    3724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0308 01:33:10.856857    3724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0308 01:33:10.882714    3724 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0308 01:33:10.899063    3724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0308 01:33:10.927760    3724 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0308 01:33:10.944518    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 01:33:11.067680    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 01:33:10.417308   14284 pod_ready.go:92] pod "kube-proxy-fplhq" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:10.417308   14284 pod_ready.go:81] duration metric: took 367.0964ms for pod "kube-proxy-fplhq" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.417408   14284 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.816580   14284 pod_ready.go:92] pod "kube-scheduler-calico-503300" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:10.816580   14284 pod_ready.go:81] duration metric: took 399.1676ms for pod "kube-scheduler-calico-503300" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:10.816580   14284 pod_ready.go:38] duration metric: took 29.3676894s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
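The pod_ready lines above poll each system-critical pod until its Ready condition reports "True". The same readiness check can be reproduced outside the test harness with kubectl's JSONPath output; in this sketch the context, namespace, and pod name are simply the ones from this run, and the loop is a hypothetical stand-in for minikube's own waiter:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady asks kubectl whether the pod's Ready condition is "True".
func podReady(kubectlContext, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubectlContext,
		"-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for i := 0; i < 30; i++ {
		ready, err := podReady("calico-503300", "kube-system", "calico-kube-controllers-5fc7d6cf67-85tcb")
		if err != nil {
			fmt.Println("kubectl error:", err)
		} else if ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}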
	I0308 01:33:10.817165   14284 api_server.go:52] waiting for apiserver process to appear ...
	I0308 01:33:10.836829   14284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 01:33:10.871770   14284 api_server.go:72] duration metric: took 52.044227s to wait for apiserver process to appear ...
	I0308 01:33:10.872065   14284 api_server.go:88] waiting for apiserver healthz status ...
	I0308 01:33:10.872065   14284 api_server.go:253] Checking apiserver healthz at https://172.20.55.16:8443/healthz ...
	I0308 01:33:10.881083   14284 api_server.go:279] https://172.20.55.16:8443/healthz returned 200:
	ok
	I0308 01:33:10.886756   14284 api_server.go:141] control plane version: v1.28.4
	I0308 01:33:10.886756   14284 api_server.go:131] duration metric: took 14.691ms to wait for apiserver health ...
	I0308 01:33:10.887303   14284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 01:33:11.036422   14284 system_pods.go:59] 9 kube-system pods found
	I0308 01:33:11.036422   14284 system_pods.go:61] "calico-kube-controllers-5fc7d6cf67-85tcb" [38a54c04-2e49-40fa-b967-ee05cd6fe5da] Running
	I0308 01:33:11.036422   14284 system_pods.go:61] "calico-node-ft27j" [b5373aca-4680-478b-8c0c-dc23e6c42dd5] Running
	I0308 01:33:11.036422   14284 system_pods.go:61] "coredns-5dd5756b68-bfdql" [6cfc0369-6ac8-4950-bc73-f73eb8930433] Running
	I0308 01:33:11.036422   14284 system_pods.go:61] "etcd-calico-503300" [1196d668-93b6-456c-9b38-dd4df91fc430] Running
	I0308 01:33:11.036989   14284 system_pods.go:61] "kube-apiserver-calico-503300" [2771afef-fd80-4658-85e3-a5922a7a24f9] Running
	I0308 01:33:11.037053   14284 system_pods.go:61] "kube-controller-manager-calico-503300" [0219c0b0-2485-4ab2-a40a-471399f6b59d] Running
	I0308 01:33:11.037089   14284 system_pods.go:61] "kube-proxy-fplhq" [2e488d0d-d07d-495b-9b04-db460bb0f650] Running
	I0308 01:33:11.037131   14284 system_pods.go:61] "kube-scheduler-calico-503300" [8f5f359e-c9a5-471e-b069-7cd6f272f204] Running
	I0308 01:33:11.037131   14284 system_pods.go:61] "storage-provisioner" [76cea2ce-35a3-41d2-aa15-bd300ad66a38] Running
	I0308 01:33:11.037170   14284 system_pods.go:74] duration metric: took 149.8656ms to wait for pod list to return data ...
	I0308 01:33:11.037170   14284 default_sa.go:34] waiting for default service account to be created ...
	I0308 01:33:11.215548   14284 default_sa.go:45] found service account: "default"
	I0308 01:33:11.215548   14284 default_sa.go:55] duration metric: took 178.3766ms for default service account to be created ...
	I0308 01:33:11.215548   14284 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 01:33:11.426531   14284 system_pods.go:86] 9 kube-system pods found
	I0308 01:33:11.426531   14284 system_pods.go:89] "calico-kube-controllers-5fc7d6cf67-85tcb" [38a54c04-2e49-40fa-b967-ee05cd6fe5da] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "calico-node-ft27j" [b5373aca-4680-478b-8c0c-dc23e6c42dd5] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "coredns-5dd5756b68-bfdql" [6cfc0369-6ac8-4950-bc73-f73eb8930433] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "etcd-calico-503300" [1196d668-93b6-456c-9b38-dd4df91fc430] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "kube-apiserver-calico-503300" [2771afef-fd80-4658-85e3-a5922a7a24f9] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "kube-controller-manager-calico-503300" [0219c0b0-2485-4ab2-a40a-471399f6b59d] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "kube-proxy-fplhq" [2e488d0d-d07d-495b-9b04-db460bb0f650] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "kube-scheduler-calico-503300" [8f5f359e-c9a5-471e-b069-7cd6f272f204] Running
	I0308 01:33:11.426531   14284 system_pods.go:89] "storage-provisioner" [76cea2ce-35a3-41d2-aa15-bd300ad66a38] Running
	I0308 01:33:11.426531   14284 system_pods.go:126] duration metric: took 210.9809ms to wait for k8s-apps to be running ...
	I0308 01:33:11.426531   14284 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 01:33:11.446244   14284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 01:33:11.469822   14284 system_svc.go:56] duration metric: took 43.2907ms WaitForService to wait for kubelet
	I0308 01:33:11.471585   14284 kubeadm.go:576] duration metric: took 52.6440361s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 01:33:11.471585   14284 node_conditions.go:102] verifying NodePressure condition ...
	I0308 01:33:11.603699   14284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 01:33:11.603699   14284 node_conditions.go:123] node cpu capacity is 2
	I0308 01:33:11.603699   14284 node_conditions.go:105] duration metric: took 132.1134ms to run NodePressure ...
	I0308 01:33:11.603699   14284 start.go:240] waiting for startup goroutines ...
	I0308 01:33:11.603699   14284 start.go:245] waiting for cluster config update ...
	I0308 01:33:11.603699   14284 start.go:254] writing updated cluster config ...
	I0308 01:33:11.619534   14284 ssh_runner.go:195] Run: rm -f paused
	I0308 01:33:11.784862   14284 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 01:33:11.788007   14284 out.go:177] * Done! kubectl is now configured to use "calico-503300" cluster and "default" namespace by default
	I0308 01:33:12.285351    3724 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2175822s)
	I0308 01:33:12.285351    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0308 01:33:12.672976    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 01:33:12.772749    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0308 01:33:12.897354    3724 api_server.go:52] waiting for apiserver process to appear ...
	I0308 01:33:12.910697    3724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 01:33:12.941451    3724 api_server.go:72] duration metric: took 43.9369ms to wait for apiserver process to appear ...
	I0308 01:33:12.941451    3724 api_server.go:88] waiting for apiserver healthz status ...
	I0308 01:33:12.941548    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:17.952426    3724 api_server.go:269] stopped: https://172.20.54.215:8443/healthz: Get "https://172.20.54.215:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0308 01:33:17.952505    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:22.968019    3724 api_server.go:269] stopped: https://172.20.54.215:8443/healthz: Get "https://172.20.54.215:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0308 01:33:22.968019    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:24.994360    3724 api_server.go:269] stopped: https://172.20.54.215:8443/healthz: Get "https://172.20.54.215:8443/healthz": read tcp 172.20.48.1:58524->172.20.54.215:8443: wsarecv: An existing connection was forcibly closed by the remote host.
	I0308 01:33:24.994564    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:27.051479    3724 api_server.go:269] stopped: https://172.20.54.215:8443/healthz: Get "https://172.20.54.215:8443/healthz": dial tcp 172.20.54.215:8443: connectex: No connection could be made because the target machine actively refused it.
	I0308 01:33:27.051605    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:30.837681    3724 api_server.go:279] https://172.20.54.215:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 01:33:30.837719    3724 api_server.go:103] status: https://172.20.54.215:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 01:33:30.837719    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:30.922613    3724 api_server.go:279] https://172.20.54.215:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0308 01:33:30.923703    3724 api_server.go:103] status: https://172.20.54.215:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0308 01:33:30.953030    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:31.016429    3724 api_server.go:279] https://172.20.54.215:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 01:33:31.016586    3724 api_server.go:103] status: https://172.20.54.215:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 01:33:31.443461    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:31.457645    3724 api_server.go:279] https://172.20.54.215:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 01:33:31.457645    3724 api_server.go:103] status: https://172.20.54.215:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 01:33:31.947419    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:31.961418    3724 api_server.go:279] https://172.20.54.215:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0308 01:33:31.961516    3724 api_server.go:103] status: https://172.20.54.215:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0308 01:33:32.450061    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:32.461345    3724 api_server.go:279] https://172.20.54.215:8443/healthz returned 200:
	ok
	I0308 01:33:32.480590    3724 api_server.go:141] control plane version: v1.28.4
	I0308 01:33:32.480646    3724 api_server.go:131] duration metric: took 19.5390137s to wait for apiserver health ...
	I0308 01:33:32.480688    3724 cni.go:84] Creating CNI manager for ""
	I0308 01:33:32.480688    3724 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0308 01:33:32.483537    3724 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0308 01:33:32.494603    3724 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0308 01:33:32.523107    3724 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0308 01:33:32.564524    3724 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 01:33:32.584227    3724 system_pods.go:59] 6 kube-system pods found
	I0308 01:33:32.584227    3724 system_pods.go:61] "coredns-5dd5756b68-2q5bn" [f6d1c69d-3975-46dc-b037-11d53142d1f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0308 01:33:32.584227    3724 system_pods.go:61] "etcd-pause-549000" [486e4fef-9f89-4ac9-a7ac-68b4793b1fc1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0308 01:33:32.584227    3724 system_pods.go:61] "kube-apiserver-pause-549000" [1399376d-526e-4406-8bb0-da40ba4023eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0308 01:33:32.584769    3724 system_pods.go:61] "kube-controller-manager-pause-549000" [90fcf813-4dab-47d7-8d70-a57106cc2358] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0308 01:33:32.584880    3724 system_pods.go:61] "kube-proxy-z8xr2" [ff75380d-e287-4d97-bd11-67036d795d5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0308 01:33:32.584880    3724 system_pods.go:61] "kube-scheduler-pause-549000" [616d7e92-28f7-41b9-8f1e-18fbbf5e246f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0308 01:33:32.584880    3724 system_pods.go:74] duration metric: took 20.3026ms to wait for pod list to return data ...
	I0308 01:33:32.584880    3724 node_conditions.go:102] verifying NodePressure condition ...
	I0308 01:33:32.592265    3724 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 01:33:32.592265    3724 node_conditions.go:123] node cpu capacity is 2
	I0308 01:33:32.592265    3724 node_conditions.go:105] duration metric: took 7.385ms to run NodePressure ...
	I0308 01:33:32.592265    3724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0308 01:33:33.311154    3724 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0308 01:33:33.345015    3724 kubeadm.go:733] kubelet initialised
	I0308 01:33:33.345015    3724 kubeadm.go:734] duration metric: took 33.8052ms waiting for restarted kubelet to initialise ...
	I0308 01:33:33.345015    3724 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:33:33.369547    3724 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-2q5bn" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:34.993719    3724 pod_ready.go:92] pod "coredns-5dd5756b68-2q5bn" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:34.993816    3724 pod_ready.go:81] duration metric: took 1.6241565s for pod "coredns-5dd5756b68-2q5bn" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:34.993853    3724 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.012933    3724 pod_ready.go:92] pod "etcd-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:37.012933    3724 pod_ready.go:81] duration metric: took 2.0190607s for pod "etcd-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.012933    3724 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.027883    3724 pod_ready.go:92] pod "kube-apiserver-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:37.027883    3724 pod_ready.go:81] duration metric: took 14.9505ms for pod "kube-apiserver-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.027883    3724 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.038715    3724 pod_ready.go:92] pod "kube-controller-manager-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:37.038715    3724 pod_ready.go:81] duration metric: took 10.8315ms for pod "kube-controller-manager-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.038715    3724 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z8xr2" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.048854    3724 pod_ready.go:92] pod "kube-proxy-z8xr2" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:37.048854    3724 pod_ready.go:81] duration metric: took 10.1392ms for pod "kube-proxy-z8xr2" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.048854    3724 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.062620    3724 pod_ready.go:92] pod "kube-scheduler-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:37.062677    3724 pod_ready.go:81] duration metric: took 13.7657ms for pod "kube-scheduler-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:37.062677    3724 pod_ready.go:38] duration metric: took 3.7176278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:33:37.062732    3724 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0308 01:33:37.087173    3724 ops.go:34] apiserver oom_adj: -16
	I0308 01:33:37.087269    3724 kubeadm.go:591] duration metric: took 36.2201028s to restartPrimaryControlPlane
	I0308 01:33:37.087321    3724 kubeadm.go:393] duration metric: took 36.3238928s to StartCluster
	I0308 01:33:37.087476    3724 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:33:37.087655    3724 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0308 01:33:37.091619    3724 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0308 01:33:37.093647    3724 start.go:234] Will wait 6m0s for node &{Name: IP:172.20.54.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0308 01:33:37.093647    3724 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0308 01:33:37.094192    3724 config.go:182] Loaded profile config "pause-549000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 01:33:37.377084    3724 out.go:177] * Verifying Kubernetes components...
	I0308 01:33:37.427652    3724 out.go:177] * Enabled addons: 
	I0308 01:33:37.597442    3724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0308 01:33:37.614513    3724 addons.go:505] duration metric: took 520.8611ms for enable addons: enabled=[]
	I0308 01:33:37.956483    3724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0308 01:33:37.986344    3724 node_ready.go:35] waiting up to 6m0s for node "pause-549000" to be "Ready" ...
	I0308 01:33:37.992045    3724 node_ready.go:49] node "pause-549000" has status "Ready":"True"
	I0308 01:33:37.992045    3724 node_ready.go:38] duration metric: took 5.7015ms for node "pause-549000" to be "Ready" ...
	I0308 01:33:37.992045    3724 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:33:38.011395    3724 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2q5bn" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:38.016764    3724 pod_ready.go:92] pod "coredns-5dd5756b68-2q5bn" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:38.016764    3724 pod_ready.go:81] duration metric: took 5.3684ms for pod "coredns-5dd5756b68-2q5bn" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:38.016764    3724 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:38.223033    3724 pod_ready.go:92] pod "etcd-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:38.223085    3724 pod_ready.go:81] duration metric: took 206.3199ms for pod "etcd-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:38.223085    3724 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:38.613742    3724 pod_ready.go:92] pod "kube-apiserver-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:38.613742    3724 pod_ready.go:81] duration metric: took 390.6527ms for pod "kube-apiserver-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:38.613742    3724 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:39.011541    3724 pod_ready.go:92] pod "kube-controller-manager-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:39.011693    3724 pod_ready.go:81] duration metric: took 397.9474ms for pod "kube-controller-manager-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:39.011693    3724 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z8xr2" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:39.420342    3724 pod_ready.go:92] pod "kube-proxy-z8xr2" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:39.420342    3724 pod_ready.go:81] duration metric: took 408.6454ms for pod "kube-proxy-z8xr2" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:39.420427    3724 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:39.818930    3724 pod_ready.go:92] pod "kube-scheduler-pause-549000" in "kube-system" namespace has status "Ready":"True"
	I0308 01:33:39.818930    3724 pod_ready.go:81] duration metric: took 398.4991ms for pod "kube-scheduler-pause-549000" in "kube-system" namespace to be "Ready" ...
	I0308 01:33:39.818930    3724 pod_ready.go:38] duration metric: took 1.8268672s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0308 01:33:39.818930    3724 api_server.go:52] waiting for apiserver process to appear ...
	I0308 01:33:39.837796    3724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 01:33:39.869210    3724 api_server.go:72] duration metric: took 2.775537s to wait for apiserver process to appear ...
	I0308 01:33:39.869274    3724 api_server.go:88] waiting for apiserver healthz status ...
	I0308 01:33:39.869274    3724 api_server.go:253] Checking apiserver healthz at https://172.20.54.215:8443/healthz ...
	I0308 01:33:39.881871    3724 api_server.go:279] https://172.20.54.215:8443/healthz returned 200:
	ok
	I0308 01:33:39.884801    3724 api_server.go:141] control plane version: v1.28.4
	I0308 01:33:39.884914    3724 api_server.go:131] duration metric: took 15.5269ms to wait for apiserver health ...
	I0308 01:33:39.884914    3724 system_pods.go:43] waiting for kube-system pods to appear ...
	I0308 01:33:40.015167    3724 system_pods.go:59] 6 kube-system pods found
	I0308 01:33:40.015167    3724 system_pods.go:61] "coredns-5dd5756b68-2q5bn" [f6d1c69d-3975-46dc-b037-11d53142d1f1] Running
	I0308 01:33:40.015167    3724 system_pods.go:61] "etcd-pause-549000" [486e4fef-9f89-4ac9-a7ac-68b4793b1fc1] Running
	I0308 01:33:40.015167    3724 system_pods.go:61] "kube-apiserver-pause-549000" [1399376d-526e-4406-8bb0-da40ba4023eb] Running
	I0308 01:33:40.015167    3724 system_pods.go:61] "kube-controller-manager-pause-549000" [90fcf813-4dab-47d7-8d70-a57106cc2358] Running
	I0308 01:33:40.015167    3724 system_pods.go:61] "kube-proxy-z8xr2" [ff75380d-e287-4d97-bd11-67036d795d5a] Running
	I0308 01:33:40.015167    3724 system_pods.go:61] "kube-scheduler-pause-549000" [616d7e92-28f7-41b9-8f1e-18fbbf5e246f] Running
	I0308 01:33:40.015167    3724 system_pods.go:74] duration metric: took 130.2514ms to wait for pod list to return data ...
	I0308 01:33:40.015167    3724 default_sa.go:34] waiting for default service account to be created ...
	I0308 01:33:40.218050    3724 default_sa.go:45] found service account: "default"
	I0308 01:33:40.218050    3724 default_sa.go:55] duration metric: took 202.8813ms for default service account to be created ...
	I0308 01:33:40.218050    3724 system_pods.go:116] waiting for k8s-apps to be running ...
	I0308 01:33:40.413640    3724 system_pods.go:86] 6 kube-system pods found
	I0308 01:33:40.413640    3724 system_pods.go:89] "coredns-5dd5756b68-2q5bn" [f6d1c69d-3975-46dc-b037-11d53142d1f1] Running
	I0308 01:33:40.413640    3724 system_pods.go:89] "etcd-pause-549000" [486e4fef-9f89-4ac9-a7ac-68b4793b1fc1] Running
	I0308 01:33:40.413640    3724 system_pods.go:89] "kube-apiserver-pause-549000" [1399376d-526e-4406-8bb0-da40ba4023eb] Running
	I0308 01:33:40.413640    3724 system_pods.go:89] "kube-controller-manager-pause-549000" [90fcf813-4dab-47d7-8d70-a57106cc2358] Running
	I0308 01:33:40.413640    3724 system_pods.go:89] "kube-proxy-z8xr2" [ff75380d-e287-4d97-bd11-67036d795d5a] Running
	I0308 01:33:40.413640    3724 system_pods.go:89] "kube-scheduler-pause-549000" [616d7e92-28f7-41b9-8f1e-18fbbf5e246f] Running
	I0308 01:33:40.413640    3724 system_pods.go:126] duration metric: took 195.5885ms to wait for k8s-apps to be running ...
	I0308 01:33:40.413640    3724 system_svc.go:44] waiting for kubelet service to be running ....
	I0308 01:33:40.427226    3724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 01:33:40.460832    3724 system_svc.go:56] duration metric: took 47.1919ms WaitForService to wait for kubelet
	I0308 01:33:40.461008    3724 kubeadm.go:576] duration metric: took 3.3673294s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0308 01:33:40.461057    3724 node_conditions.go:102] verifying NodePressure condition ...
	I0308 01:33:40.610529    3724 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0308 01:33:40.610529    3724 node_conditions.go:123] node cpu capacity is 2
	I0308 01:33:40.610529    3724 node_conditions.go:105] duration metric: took 149.4199ms to run NodePressure ...
	I0308 01:33:40.610529    3724 start.go:240] waiting for startup goroutines ...
	I0308 01:33:40.610529    3724 start.go:245] waiting for cluster config update ...
	I0308 01:33:40.610529    3724 start.go:254] writing updated cluster config ...
	I0308 01:33:40.626078    3724 ssh_runner.go:195] Run: rm -f paused
	I0308 01:33:40.779094    3724 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0308 01:33:40.783065    3724 out.go:177] * Done! kubectl is now configured to use "pause-549000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 08 01:33:25 pause-549000 dockerd[8323]: time="2024-03-08T01:33:25.029443316Z" level=info msg="shim disconnected" id=a3aed9f888fa47f5a8f08b19b0c45e2e5f421ed50a15f7f25828639ba298851b namespace=moby
	Mar 08 01:33:25 pause-549000 dockerd[8317]: time="2024-03-08T01:33:25.029667717Z" level=info msg="ignoring event" container=a3aed9f888fa47f5a8f08b19b0c45e2e5f421ed50a15f7f25828639ba298851b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 08 01:33:25 pause-549000 dockerd[8323]: time="2024-03-08T01:33:25.030244721Z" level=warning msg="cleaning up after shim disconnected" id=a3aed9f888fa47f5a8f08b19b0c45e2e5f421ed50a15f7f25828639ba298851b namespace=moby
	Mar 08 01:33:25 pause-549000 dockerd[8323]: time="2024-03-08T01:33:25.030472723Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.551869276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.551984377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.552009277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.552195079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.606894376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.607008877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.607044177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.608956791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.787138185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.787228085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.787250485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:26 pause-549000 dockerd[8323]: time="2024-03-08T01:33:26.788033191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:31 pause-549000 cri-dockerd[8593]: time="2024-03-08T01:33:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.034410894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.037593417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.038030620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.041526246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.047815891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.047899992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.047924392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 08 01:33:33 pause-549000 dockerd[8323]: time="2024-03-08T01:33:33.048503996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fbd8e6022b6dc       83f6cc407eed8       About a minute ago   Running             kube-proxy                2                   1aedc88943e82       kube-proxy-z8xr2
	8cbe07a1943bd       ead0a4a53df89       About a minute ago   Running             coredns                   2                   f54c931dc863c       coredns-5dd5756b68-2q5bn
	70615601747b1       7fe0e6f37db33       About a minute ago   Running             kube-apiserver            3                   50c6152f1e6ae       kube-apiserver-pause-549000
	15d872b3f05b6       d058aa5ab969c       About a minute ago   Running             kube-controller-manager   2                   916b477f617a7       kube-controller-manager-pause-549000
	97d7744bd1667       73deb9a3f7025       About a minute ago   Running             etcd                      2                   79b4c6c608b30       etcd-pause-549000
	a3aed9f888fa4       7fe0e6f37db33       2 minutes ago        Exited              kube-apiserver            2                   50c6152f1e6ae       kube-apiserver-pause-549000
	be9eaddf3ddcc       e3db313c6dbc0       2 minutes ago        Running             kube-scheduler            2                   d2335daff70e6       kube-scheduler-pause-549000
	6cbd157ab876a       ead0a4a53df89       2 minutes ago        Exited              coredns                   1                   a4f413a3fab36       coredns-5dd5756b68-2q5bn
	1650ae73fce37       83f6cc407eed8       2 minutes ago        Exited              kube-proxy                1                   c7cf0231ec497       kube-proxy-z8xr2
	ca0870d599f16       d058aa5ab969c       2 minutes ago        Exited              kube-controller-manager   1                   62c0412021bfe       kube-controller-manager-pause-549000
	8961256e70cbe       73deb9a3f7025       2 minutes ago        Exited              etcd                      1                   7a74af2b7663b       etcd-pause-549000
	0fe3021a276bc       e3db313c6dbc0       2 minutes ago        Exited              kube-scheduler            1                   96387479d6922       kube-scheduler-pause-549000
	
	
	==> coredns [6cbd157ab876] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b0d01e750f1333b12a0afb000b64bd021779da79ee4f8aee5ecad4705d75b53898cf9670ad125c407f1c536554c13092ed2cbd72906f6f0aabed3ba5d92a353f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43621 - 31752 "HINFO IN 1724806026985328266.1499265499857429649. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.491006586s
	
	
	==> coredns [8cbe07a1943b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b0d01e750f1333b12a0afb000b64bd021779da79ee4f8aee5ecad4705d75b53898cf9670ad125c407f1c536554c13092ed2cbd72906f6f0aabed3ba5d92a353f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36044 - 58758 "HINFO IN 3020783146318684593.1048067044606582722. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062755353s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[  +0.094654] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.804530] systemd-fstab-generator[2773]: Ignoring "noauto" option for root device
	[  +0.124510] kauditd_printk_skb: 62 callbacks suppressed
	[ +12.982461] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.129624] systemd-fstab-generator[3406]: Ignoring "noauto" option for root device
	[  +9.073674] kauditd_printk_skb: 82 callbacks suppressed
	[Mar 8 01:28] hrtimer: interrupt took 2131306 ns
	[Mar 8 01:32] systemd-fstab-generator[7887]: Ignoring "noauto" option for root device
	[  +0.843188] systemd-fstab-generator[7932]: Ignoring "noauto" option for root device
	[  +0.466922] systemd-fstab-generator[7944]: Ignoring "noauto" option for root device
	[  +0.739458] systemd-fstab-generator[7963]: Ignoring "noauto" option for root device
	[  +5.480217] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.247218] systemd-fstab-generator[8486]: Ignoring "noauto" option for root device
	[  +0.330010] systemd-fstab-generator[8498]: Ignoring "noauto" option for root device
	[  +0.265887] systemd-fstab-generator[8510]: Ignoring "noauto" option for root device
	[  +0.380888] systemd-fstab-generator[8552]: Ignoring "noauto" option for root device
	[  +1.643557] systemd-fstab-generator[8982]: Ignoring "noauto" option for root device
	[  +1.762188] kauditd_printk_skb: 179 callbacks suppressed
	[Mar 8 01:33] kauditd_printk_skb: 60 callbacks suppressed
	[  +2.019559] systemd-fstab-generator[10364]: Ignoring "noauto" option for root device
	[ +12.547255] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.138076] kauditd_printk_skb: 6 callbacks suppressed
	[  +4.559628] systemd-fstab-generator[11098]: Ignoring "noauto" option for root device
	[  +6.322832] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.104317] systemd-fstab-generator[11266]: Ignoring "noauto" option for root device
	
	
	==> etcd [8961256e70cb] <==
	{"level":"info","ts":"2024-03-08T01:33:00.130109Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"71.16665ms"}
	{"level":"info","ts":"2024-03-08T01:33:00.152372Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-03-08T01:33:00.275258Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"f11ddc63fc62bb97","local-member-id":"8cb6433ac2f96c64","commit-index":611}
	{"level":"info","ts":"2024-03-08T01:33:00.279721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 switched to configuration voters=()"}
	{"level":"info","ts":"2024-03-08T01:33:00.283848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 became follower at term 2"}
	{"level":"info","ts":"2024-03-08T01:33:00.284425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8cb6433ac2f96c64 [peers: [], term: 2, commit: 611, applied: 0, lastindex: 611, lastterm: 2]"}
	{"level":"warn","ts":"2024-03-08T01:33:00.323467Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-03-08T01:33:00.3741Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":511}
	{"level":"info","ts":"2024-03-08T01:33:00.384444Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-03-08T01:33:00.417235Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"8cb6433ac2f96c64","timeout":"7s"}
	{"level":"info","ts":"2024-03-08T01:33:00.420184Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"8cb6433ac2f96c64"}
	{"level":"info","ts":"2024-03-08T01:33:00.420867Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"8cb6433ac2f96c64","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-03-08T01:33:00.421594Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T01:33:00.421936Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T01:33:00.422592Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T01:33:00.424123Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-08T01:33:00.425615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 switched to configuration voters=(10139365530729540708)"}
	{"level":"info","ts":"2024-03-08T01:33:00.426299Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f11ddc63fc62bb97","local-member-id":"8cb6433ac2f96c64","added-peer-id":"8cb6433ac2f96c64","added-peer-peer-urls":["https://172.20.54.215:2380"]}
	{"level":"info","ts":"2024-03-08T01:33:00.427583Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f11ddc63fc62bb97","local-member-id":"8cb6433ac2f96c64","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T01:33:00.427998Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T01:33:00.455983Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T01:33:00.457193Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8cb6433ac2f96c64","initial-advertise-peer-urls":["https://172.20.54.215:2380"],"listen-peer-urls":["https://172.20.54.215:2380"],"advertise-client-urls":["https://172.20.54.215:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.54.215:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T01:33:00.457393Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.20.54.215:2380"}
	{"level":"info","ts":"2024-03-08T01:33:00.465505Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.20.54.215:2380"}
	{"level":"info","ts":"2024-03-08T01:33:00.459028Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [97d7744bd166] <==
	{"level":"info","ts":"2024-03-08T01:33:26.933926Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-08T01:33:26.934722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 switched to configuration voters=(10139365530729540708)"}
	{"level":"info","ts":"2024-03-08T01:33:26.935129Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f11ddc63fc62bb97","local-member-id":"8cb6433ac2f96c64","added-peer-id":"8cb6433ac2f96c64","added-peer-peer-urls":["https://172.20.54.215:2380"]}
	{"level":"info","ts":"2024-03-08T01:33:26.935607Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f11ddc63fc62bb97","local-member-id":"8cb6433ac2f96c64","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T01:33:26.94044Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-08T01:33:26.965723Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.20.54.215:2380"}
	{"level":"info","ts":"2024-03-08T01:33:26.965972Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.20.54.215:2380"}
	{"level":"info","ts":"2024-03-08T01:33:26.965458Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-08T01:33:26.968531Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-08T01:33:26.968452Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8cb6433ac2f96c64","initial-advertise-peer-urls":["https://172.20.54.215:2380"],"listen-peer-urls":["https://172.20.54.215:2380"],"advertise-client-urls":["https://172.20.54.215:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.54.215:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-08T01:33:28.566049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-08T01:33:28.566985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-08T01:33:28.567082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 received MsgPreVoteResp from 8cb6433ac2f96c64 at term 2"}
	{"level":"info","ts":"2024-03-08T01:33:28.567147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 became candidate at term 3"}
	{"level":"info","ts":"2024-03-08T01:33:28.567163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 received MsgVoteResp from 8cb6433ac2f96c64 at term 3"}
	{"level":"info","ts":"2024-03-08T01:33:28.567198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8cb6433ac2f96c64 became leader at term 3"}
	{"level":"info","ts":"2024-03-08T01:33:28.567213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8cb6433ac2f96c64 elected leader 8cb6433ac2f96c64 at term 3"}
	{"level":"info","ts":"2024-03-08T01:33:28.574147Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8cb6433ac2f96c64","local-member-attributes":"{Name:pause-549000 ClientURLs:[https://172.20.54.215:2379]}","request-path":"/0/members/8cb6433ac2f96c64/attributes","cluster-id":"f11ddc63fc62bb97","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-08T01:33:28.574197Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T01:33:28.584696Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-08T01:33:28.587834Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-08T01:33:28.587975Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-08T01:33:28.590586Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-08T01:33:28.599096Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.54.215:2379"}
	{"level":"info","ts":"2024-03-08T01:33:34.996562Z","caller":"traceutil/trace.go:171","msg":"trace[1486148562] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"270.66915ms","start":"2024-03-08T01:33:34.725866Z","end":"2024-03-08T01:33:34.996535Z","steps":["trace[1486148562] 'process raft request'  (duration: 270.280648ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:35:37 up 12 min,  0 users,  load average: 0.24, 0.47, 0.27
	Linux pause-549000 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [70615601747b] <==
	I0308 01:33:30.834417       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0308 01:33:30.834672       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0308 01:33:30.834927       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0308 01:33:30.964856       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0308 01:33:30.978557       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0308 01:33:30.980994       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0308 01:33:30.981032       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0308 01:33:30.984530       1 shared_informer.go:318] Caches are synced for configmaps
	I0308 01:33:30.984598       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0308 01:33:30.990578       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0308 01:33:30.991187       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0308 01:33:31.034957       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0308 01:33:31.035246       1 aggregator.go:166] initial CRD sync complete...
	I0308 01:33:31.035439       1 autoregister_controller.go:141] Starting autoregister controller
	I0308 01:33:31.035623       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0308 01:33:31.035810       1 cache.go:39] Caches are synced for autoregister controller
	I0308 01:33:31.691172       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0308 01:33:32.160037       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.20.54.215]
	I0308 01:33:32.162240       1 controller.go:624] quota admission added evaluator for: endpoints
	I0308 01:33:32.172192       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0308 01:33:32.843774       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0308 01:33:32.908752       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0308 01:33:33.158417       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0308 01:33:33.224947       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0308 01:33:33.260353       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [a3aed9f888fa] <==
	I0308 01:33:04.114485       1 server.go:148] Version: v1.28.4
	I0308 01:33:04.114549       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0308 01:33:04.991400       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0308 01:33:04.991758       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	W0308 01:33:04.993294       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0308 01:33:05.000490       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0308 01:33:05.000509       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0308 01:33:05.000705       1 instance.go:298] Using reconciler: lease
	W0308 01:33:05.002437       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:05.992865       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:05.994630       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:06.003960       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:07.323122       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:07.672864       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:07.861575       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:09.428196       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:09.916772       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:09.932475       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:13.494746       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:13.884531       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:14.058513       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:19.313464       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:20.380166       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0308 01:33:21.289043       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0308 01:33:25.002835       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [15d872b3f05b] <==
	I0308 01:33:43.958734       1 shared_informer.go:318] Caches are synced for crt configmap
	I0308 01:33:43.959079       1 shared_informer.go:318] Caches are synced for node
	I0308 01:33:43.960037       1 range_allocator.go:174] "Sending events to api server"
	I0308 01:33:43.960493       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0308 01:33:43.960628       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0308 01:33:43.960643       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0308 01:33:43.962915       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0308 01:33:43.969883       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0308 01:33:43.971517       1 shared_informer.go:318] Caches are synced for PV protection
	I0308 01:33:43.978132       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0308 01:33:43.986592       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0308 01:33:43.986731       1 shared_informer.go:318] Caches are synced for endpoint
	I0308 01:33:43.992750       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0308 01:33:44.023813       1 shared_informer.go:318] Caches are synced for cronjob
	I0308 01:33:44.023993       1 shared_informer.go:318] Caches are synced for job
	I0308 01:33:44.031467       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0308 01:33:44.031998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="253.301µs"
	I0308 01:33:44.039068       1 shared_informer.go:318] Caches are synced for disruption
	I0308 01:33:44.039305       1 shared_informer.go:318] Caches are synced for deployment
	I0308 01:33:44.045652       1 shared_informer.go:318] Caches are synced for resource quota
	I0308 01:33:44.054472       1 shared_informer.go:318] Caches are synced for stateful set
	I0308 01:33:44.117390       1 shared_informer.go:318] Caches are synced for resource quota
	I0308 01:33:44.510943       1 shared_informer.go:318] Caches are synced for garbage collector
	I0308 01:33:44.511553       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0308 01:33:44.514962       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [ca0870d599f1] <==
	
	
	==> kube-proxy [1650ae73fce3] <==
	
	
	==> kube-proxy [fbd8e6022b6d] <==
	I0308 01:33:33.445970       1 server_others.go:69] "Using iptables proxy"
	I0308 01:33:33.531068       1 node.go:141] Successfully retrieved node IP: 172.20.54.215
	I0308 01:33:33.625780       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0308 01:33:33.626023       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0308 01:33:33.630589       1 server_others.go:152] "Using iptables Proxier"
	I0308 01:33:33.631050       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0308 01:33:33.632240       1 server.go:846] "Version info" version="v1.28.4"
	I0308 01:33:33.632769       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0308 01:33:33.634252       1 config.go:188] "Starting service config controller"
	I0308 01:33:33.634409       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0308 01:33:33.635061       1 config.go:97] "Starting endpoint slice config controller"
	I0308 01:33:33.635103       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0308 01:33:33.635886       1 config.go:315] "Starting node config controller"
	I0308 01:33:33.635919       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0308 01:33:33.735854       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0308 01:33:33.735954       1 shared_informer.go:318] Caches are synced for service config
	I0308 01:33:33.736436       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0fe3021a276b] <==
	I0308 01:33:01.480806       1 serving.go:348] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [be9eaddf3ddc] <==
	W0308 01:33:30.896009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0308 01:33:30.896046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0308 01:33:30.896124       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0308 01:33:30.896163       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0308 01:33:30.896240       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0308 01:33:30.896257       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0308 01:33:30.896358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0308 01:33:30.896377       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0308 01:33:30.896456       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0308 01:33:30.896495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0308 01:33:30.896586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0308 01:33:30.896637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0308 01:33:30.896720       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0308 01:33:30.896756       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0308 01:33:30.901422       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0308 01:33:30.901467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0308 01:33:30.901644       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0308 01:33:30.901733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0308 01:33:30.901913       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0308 01:33:30.901997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0308 01:33:30.903067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0308 01:33:30.903121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0308 01:33:30.903138       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0308 01:33:30.903148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0308 01:33:32.785495       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 08 01:33:30 pause-549000 kubelet[10371]: I0308 01:33:30.925982   10371 topology_manager.go:215] "Topology Admit Handler" podUID="ff75380d-e287-4d97-bd11-67036d795d5a" podNamespace="kube-system" podName="kube-proxy-z8xr2"
	Mar 08 01:33:30 pause-549000 kubelet[10371]: I0308 01:33:30.937069   10371 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 08 01:33:30 pause-549000 kubelet[10371]: W0308 01:33:30.946476   10371 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:pause-549000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-549000' and this object
	Mar 08 01:33:30 pause-549000 kubelet[10371]: E0308 01:33:30.946551   10371 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:pause-549000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-549000' and this object
	Mar 08 01:33:30 pause-549000 kubelet[10371]: W0308 01:33:30.946637   10371 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:pause-549000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-549000' and this object
	Mar 08 01:33:30 pause-549000 kubelet[10371]: E0308 01:33:30.946694   10371 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:pause-549000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-549000' and this object
	Mar 08 01:33:30 pause-549000 kubelet[10371]: W0308 01:33:30.952989   10371 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:pause-549000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-549000' and this object
	Mar 08 01:33:30 pause-549000 kubelet[10371]: E0308 01:33:30.953037   10371 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:pause-549000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-549000' and this object
	Mar 08 01:33:31 pause-549000 kubelet[10371]: I0308 01:33:31.050392   10371 kubelet_node_status.go:108] "Node was previously registered" node="pause-549000"
	Mar 08 01:33:31 pause-549000 kubelet[10371]: I0308 01:33:31.050937   10371 kubelet_node_status.go:73] "Successfully registered node" node="pause-549000"
	Mar 08 01:33:31 pause-549000 kubelet[10371]: I0308 01:33:31.059029   10371 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 08 01:33:31 pause-549000 kubelet[10371]: I0308 01:33:31.060367   10371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff75380d-e287-4d97-bd11-67036d795d5a-lib-modules\") pod \"kube-proxy-z8xr2\" (UID: \"ff75380d-e287-4d97-bd11-67036d795d5a\") " pod="kube-system/kube-proxy-z8xr2"
	Mar 08 01:33:31 pause-549000 kubelet[10371]: I0308 01:33:31.064394   10371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff75380d-e287-4d97-bd11-67036d795d5a-xtables-lock\") pod \"kube-proxy-z8xr2\" (UID: \"ff75380d-e287-4d97-bd11-67036d795d5a\") " pod="kube-system/kube-proxy-z8xr2"
	Mar 08 01:33:31 pause-549000 kubelet[10371]: I0308 01:33:31.066607   10371 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 08 01:33:32 pause-549000 kubelet[10371]: E0308 01:33:32.066301   10371 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Mar 08 01:33:32 pause-549000 kubelet[10371]: E0308 01:33:32.067173   10371 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff75380d-e287-4d97-bd11-67036d795d5a-kube-proxy podName:ff75380d-e287-4d97-bd11-67036d795d5a nodeName:}" failed. No retries permitted until 2024-03-08 01:33:32.567094112 +0000 UTC m=+19.871234118 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ff75380d-e287-4d97-bd11-67036d795d5a-kube-proxy") pod "kube-proxy-z8xr2" (UID: "ff75380d-e287-4d97-bd11-67036d795d5a") : failed to sync configmap cache: timed out waiting for the condition
	Mar 08 01:33:32 pause-549000 kubelet[10371]: E0308 01:33:32.066921   10371 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Mar 08 01:33:32 pause-549000 kubelet[10371]: E0308 01:33:32.067273   10371 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6d1c69d-3975-46dc-b037-11d53142d1f1-config-volume podName:f6d1c69d-3975-46dc-b037-11d53142d1f1 nodeName:}" failed. No retries permitted until 2024-03-08 01:33:32.567244413 +0000 UTC m=+19.871384419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f6d1c69d-3975-46dc-b037-11d53142d1f1-config-volume") pod "coredns-5dd5756b68-2q5bn" (UID: "f6d1c69d-3975-46dc-b037-11d53142d1f1") : failed to sync configmap cache: timed out waiting for the condition
	Mar 08 01:33:32 pause-549000 kubelet[10371]: I0308 01:33:32.728535   10371 scope.go:117] "RemoveContainer" containerID="1650ae73fce37e29f70126d4d3083c38fe8bd2e6c0d46b2fb8a9a3e885b5c364"
	Mar 08 01:33:32 pause-549000 kubelet[10371]: I0308 01:33:32.729581   10371 scope.go:117] "RemoveContainer" containerID="6cbd157ab876a94115260ab401f8c0813ec91011ef37275b333805e994dc04d9"
	Mar 08 01:33:49 pause-549000 kubelet[10371]: I0308 01:33:49.318431   10371 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Mar 08 01:33:49 pause-549000 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Mar 08 01:33:49 pause-549000 systemd[1]: kubelet.service: Deactivated successfully.
	Mar 08 01:33:49 pause-549000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 08 01:33:49 pause-549000 systemd[1]: kubelet.service: Consumed 1.782s CPU time.
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 01:35:14.833912   14284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-549000 -n pause-549000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-549000 -n pause-549000: exit status 2 (14.4353014s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 01:35:41.492674    6584 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-549000" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestPause/serial/Unpause (112.28s)
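Context for the skip above: the harness collects kubectl-based post-mortem output only when minikube reports the apiserver as Running; a "Paused" state is logged and skipped rather than treated as an additional failure. A minimal Go sketch of that gate (hypothetical function and parameter names; the actual check lives in helpers_test.go):

	package helpers

	import "testing"

	// maybeCollectKubectlPostMortem sketches the gating shown above: any state
	// other than a Running apiserver is logged and the kubectl diagnostics are
	// skipped, since kubectl cannot answer against a paused/stopped apiserver.
	func maybeCollectKubectlPostMortem(t *testing.T, apiServerState string) {
		if apiServerState != "Running" {
			t.Logf("apiserver is not running, skipping kubectl commands (state=%q)", apiServerState)
			return
		}
		// ... run `kubectl get pods -A` and similar post-mortem commands here ...
	}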

                                                
                                    

Test pass (170/216)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 18.12
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.36
9 TestDownloadOnly/v1.20.0/DeleteAll 1.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.29
12 TestDownloadOnly/v1.28.4/json-events 12.41
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.52
18 TestDownloadOnly/v1.28.4/DeleteAll 1.39
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 1.29
21 TestDownloadOnly/v1.29.0-rc.2/json-events 10.59
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0.04
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.47
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 1.26
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 1.24
30 TestBinaryMirror 6.13
31 TestOffline 381.05
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.26
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.27
36 TestAddons/Setup 354.61
39 TestAddons/parallel/Ingress 67.37
40 TestAddons/parallel/InspektorGadget 27.09
41 TestAddons/parallel/MetricsServer 21.54
42 TestAddons/parallel/HelmTiller 26.55
44 TestAddons/parallel/CSI 101.17
45 TestAddons/parallel/Headlamp 34.49
46 TestAddons/parallel/CloudSpanner 21.1
47 TestAddons/parallel/LocalPath 32.27
48 TestAddons/parallel/NvidiaDevicePlugin 22.17
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.34
53 TestAddons/StoppedEnableDisable 50.17
54 TestCertOptions 467.39
55 TestCertExpiration 870.17
56 TestDockerFlags 532.62
57 TestForceSystemdFlag 242.44
58 TestForceSystemdEnv 407.03
65 TestErrorSpam/start 15.98
66 TestErrorSpam/status 32.74
67 TestErrorSpam/pause 20.2
68 TestErrorSpam/unpause 20.08
69 TestErrorSpam/stop 53.39
72 TestFunctional/serial/CopySyncFile 0.03
73 TestFunctional/serial/StartWithProxy 215.72
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 114.15
76 TestFunctional/serial/KubeContext 0.12
77 TestFunctional/serial/KubectlGetPods 0.21
80 TestFunctional/serial/CacheCmd/cache/add_remote 24.49
81 TestFunctional/serial/CacheCmd/cache/add_local 9.63
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.25
83 TestFunctional/serial/CacheCmd/cache/list 0.26
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.61
85 TestFunctional/serial/CacheCmd/cache/cache_reload 33.17
86 TestFunctional/serial/CacheCmd/cache/delete 0.52
87 TestFunctional/serial/MinikubeKubectlCmd 0.42
89 TestFunctional/serial/ExtraConfig 116.78
90 TestFunctional/serial/ComponentHealth 0.17
91 TestFunctional/serial/LogsCmd 8.09
92 TestFunctional/serial/LogsFileCmd 9.88
93 TestFunctional/serial/InvalidService 19.63
99 TestFunctional/parallel/StatusCmd 38.57
103 TestFunctional/parallel/ServiceCmdConnect 24.93
104 TestFunctional/parallel/AddonsCmd 0.74
105 TestFunctional/parallel/PersistentVolumeClaim 47.24
107 TestFunctional/parallel/SSHCmd 17.34
108 TestFunctional/parallel/CpCmd 55.83
109 TestFunctional/parallel/MySQL 68.23
110 TestFunctional/parallel/FileSync 9.56
111 TestFunctional/parallel/CertSync 59.72
115 TestFunctional/parallel/NodeLabels 0.19
117 TestFunctional/parallel/NonActiveRuntimeDisabled 10.4
119 TestFunctional/parallel/License 2.86
120 TestFunctional/parallel/ServiceCmd/DeployApp 20.46
121 TestFunctional/parallel/ProfileCmd/profile_not_create 11.27
122 TestFunctional/parallel/ProfileCmd/profile_list 10.2
123 TestFunctional/parallel/ServiceCmd/List 13
124 TestFunctional/parallel/ProfileCmd/profile_json_output 10.74
125 TestFunctional/parallel/ServiceCmd/JSONOutput 13.13
127 TestFunctional/parallel/Version/short 0.26
128 TestFunctional/parallel/Version/components 7.29
129 TestFunctional/parallel/ImageCommands/ImageListShort 7.06
130 TestFunctional/parallel/ImageCommands/ImageListTable 6.88
131 TestFunctional/parallel/ImageCommands/ImageListJson 7.15
132 TestFunctional/parallel/ImageCommands/ImageListYaml 7.21
133 TestFunctional/parallel/ImageCommands/ImageBuild 24.07
134 TestFunctional/parallel/ImageCommands/Setup 4.03
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 22.3
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 18.88
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 25.69
140 TestFunctional/parallel/DockerEnv/powershell 40.1
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 9.36
142 TestFunctional/parallel/UpdateContextCmd/no_changes 2.26
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.4
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.4
145 TestFunctional/parallel/ImageCommands/ImageRemove 14.73
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 16.74
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9
149 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 7.39
150 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
152 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.51
158 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
159 TestFunctional/delete_addon-resizer_images 0.45
160 TestFunctional/delete_my-image_image 0.18
161 TestFunctional/delete_minikube_cached_images 0.18
165 TestMutliControlPlane/serial/StartCluster 667.32
166 TestMutliControlPlane/serial/DeployApp 12.41
168 TestMutliControlPlane/serial/AddWorkerNode 235.94
169 TestMutliControlPlane/serial/NodeLabels 0.18
170 TestMutliControlPlane/serial/HAppyAfterClusterStart 26.91
171 TestMutliControlPlane/serial/CopyFile 592.69
172 TestMutliControlPlane/serial/StopSecondaryNode 67.49
173 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 19.31
177 TestImageBuild/serial/Setup 180.96
178 TestImageBuild/serial/NormalBuild 8.99
179 TestImageBuild/serial/BuildWithBuildArg 8.13
180 TestImageBuild/serial/BuildWithDockerIgnore 7.16
181 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7
185 TestJSONOutput/start/Command 223.97
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 7.33
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 7.02
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 31.55
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 1.35
213 TestMainNoArgs 0.22
214 TestMinikubeProfile 487.45
217 TestMountStart/serial/StartWithMountFirst 133.64
218 TestMountStart/serial/VerifyMountFirst 8.41
219 TestMountStart/serial/StartWithMountSecond 137.2
220 TestMountStart/serial/VerifyMountSecond 8.75
221 TestMountStart/serial/DeleteFirst 28.35
222 TestMountStart/serial/VerifyMountPostDelete 8.55
223 TestMountStart/serial/Stop 24.24
224 TestMountStart/serial/RestartStopped 105.79
225 TestMountStart/serial/VerifyMountPostStop 8.73
228 TestMultiNode/serial/FreshStart2Nodes 386.57
229 TestMultiNode/serial/DeployApp2Nodes 8.97
231 TestMultiNode/serial/AddNode 203.95
232 TestMultiNode/serial/MultiNodeLabels 0.16
233 TestMultiNode/serial/ProfileList 10.87
234 TestMultiNode/serial/CopyFile 327.87
235 TestMultiNode/serial/StopNode 67.07
236 TestMultiNode/serial/StartAfterStop 160.56
238 TestMultiNode/serial/DeleteNode 62.9
243 TestPreload 484.05
244 TestScheduledStopWindows 310.96
249 TestRunningBinaryUpgrade 984.97
251 TestKubernetesUpgrade 1115.08
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.37
267 TestStoppedBinaryUpgrade/Setup 1.05
268 TestStoppedBinaryUpgrade/Upgrade 779.21
277 TestPause/serial/Start 392.23
279 TestStoppedBinaryUpgrade/MinikubeLogs 9.13
282 TestPause/serial/SecondStartNoReconfiguration 479.4
297 TestPause/serial/Pause 8.8
301 TestPause/serial/VerifyStatus 13.84
x
+
TestDownloadOnly/v1.20.0/json-events (18.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-244600 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-244600 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (18.1225355s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (18.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-244600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-244600: exit status 85 (362.498ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-244600 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:37 UTC |          |
	|         | -p download-only-244600        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 22:37:42
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 22:37:42.557078     184 out.go:291] Setting OutFile to fd 476 ...
	I0307 22:37:42.557647     184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:37:42.557647     184 out.go:304] Setting ErrFile to fd 628...
	I0307 22:37:42.557647     184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 22:37:42.570864     184 root.go:314] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0307 22:37:42.583049     184 out.go:298] Setting JSON to true
	I0307 22:37:42.586262     184 start.go:129] hostinfo: {"hostname":"minikube7","uptime":10016,"bootTime":1709841045,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0307 22:37:42.586262     184 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 22:37:42.598459     184 out.go:97] [download-only-244600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	W0307 22:37:42.600991     184 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0307 22:37:42.600991     184 notify.go:220] Checking for updates...
	I0307 22:37:42.604019     184 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 22:37:42.606838     184 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0307 22:37:42.609363     184 out.go:169] MINIKUBE_LOCATION=16214
	I0307 22:37:42.611587     184 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0307 22:37:42.616678     184 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 22:37:42.617989     184 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 22:37:47.804343     184 out.go:97] Using the hyperv driver based on user configuration
	I0307 22:37:47.810030     184 start.go:297] selected driver: hyperv
	I0307 22:37:47.810030     184 start.go:901] validating driver "hyperv" against <nil>
	I0307 22:37:47.810094     184 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 22:37:47.861910     184 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0307 22:37:47.862982     184 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 22:37:47.862982     184 cni.go:84] Creating CNI manager for ""
	I0307 22:37:47.862982     184 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0307 22:37:47.862982     184 start.go:340] cluster config:
	{Name:download-only-244600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-244600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 22:37:47.864541     184 iso.go:125] acquiring lock: {Name:mk41e0d38e058de906ab8df117c3158b3dc0e5b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:37:47.869496     184 out.go:97] Downloading VM boot image ...
	I0307 22:37:47.870019     184 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.32.1-1708638130-18020-amd64.iso
	I0307 22:37:52.167166     184 out.go:97] Starting "download-only-244600" primary control-plane node in "download-only-244600" cluster
	I0307 22:37:52.179422     184 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 22:37:52.211404     184 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0307 22:37:52.224215     184 cache.go:56] Caching tarball of preloaded images
	I0307 22:37:52.224766     184 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0307 22:37:52.622165     184 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0307 22:37:52.640575     184 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0307 22:37:52.707874     184 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0307 22:37:56.758099     184 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0307 22:37:56.761394     184 preload.go:255] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-244600 host does not exist
	  To start a cluster, run: "minikube start -p download-only-244600"

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 22:38:00.703299    2296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.36s)
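The exit status 85 from `minikube logs -p download-only-244600` is consistent with the hint in the captured output: the profile was created with --download-only, so its control-plane host never existed and there is no VM to collect logs from. The check still passes because the non-zero exit is logged rather than treated as an error. A rough Go sketch of that behaviour, under that assumption (hypothetical names; not the actual test code):

	package downloadonly

	import (
		"os/exec"
		"testing"
	)

	// logsDurationSketch runs `minikube logs` for a download-only profile and
	// records (rather than fails on) the expected non-zero exit; the test
	// framework reports how long the call took.
	func logsDurationSketch(t *testing.T, minikubeBin, profile string) {
		out, err := exec.Command(minikubeBin, "logs", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("minikube logs failed with error: %v\n%s", err, out)
		}
	}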

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (1.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1928493s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-244600
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-244600: (1.2912277s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.29s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (12.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-219100 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-219100 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv: (12.4094445s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (12.41s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-219100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-219100: exit status 85 (517.9101ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-244600 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:37 UTC |                     |
	|         | -p download-only-244600        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| delete  | -p download-only-244600        | download-only-244600 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| start   | -o=json --download-only        | download-only-219100 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC |                     |
	|         | -p download-only-219100        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 22:38:03
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 22:38:03.621388    5056 out.go:291] Setting OutFile to fd 444 ...
	I0307 22:38:03.621471    5056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:38:03.621471    5056 out.go:304] Setting ErrFile to fd 440...
	I0307 22:38:03.622047    5056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:38:03.641549    5056 out.go:298] Setting JSON to true
	I0307 22:38:03.648843    5056 start.go:129] hostinfo: {"hostname":"minikube7","uptime":10037,"bootTime":1709841045,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0307 22:38:03.648843    5056 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 22:38:03.657075    5056 out.go:97] [download-only-219100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0307 22:38:03.664523    5056 notify.go:220] Checking for updates...
	I0307 22:38:03.669845    5056 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 22:38:03.680115    5056 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0307 22:38:03.691142    5056 out.go:169] MINIKUBE_LOCATION=16214
	I0307 22:38:03.700063    5056 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0307 22:38:03.712349    5056 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 22:38:03.714374    5056 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 22:38:08.928174    5056 out.go:97] Using the hyperv driver based on user configuration
	I0307 22:38:08.932768    5056 start.go:297] selected driver: hyperv
	I0307 22:38:08.932893    5056 start.go:901] validating driver "hyperv" against <nil>
	I0307 22:38:08.932893    5056 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 22:38:08.981326    5056 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0307 22:38:08.983623    5056 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 22:38:08.983623    5056 cni.go:84] Creating CNI manager for ""
	I0307 22:38:08.983623    5056 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 22:38:08.984114    5056 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 22:38:08.984114    5056 start.go:340] cluster config:
	{Name:download-only-219100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-219100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 22:38:08.984114    5056 iso.go:125] acquiring lock: {Name:mk41e0d38e058de906ab8df117c3158b3dc0e5b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:38:08.986066    5056 out.go:97] Starting "download-only-219100" primary control-plane node in "download-only-219100" cluster
	I0307 22:38:08.986066    5056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 22:38:09.031580    5056 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0307 22:38:09.031580    5056 cache.go:56] Caching tarball of preloaded images
	I0307 22:38:09.032054    5056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0307 22:38:09.035997    5056 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0307 22:38:09.035997    5056 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0307 22:38:09.104918    5056 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-219100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-219100"

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 22:38:15.953908    9348 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.52s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (1.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3840698s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (1.39s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-219100
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-219100: (1.2891653s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.29s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (10.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-409000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-409000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv: (10.5915013s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (10.59s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.04s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-409000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-409000: exit status 85 (463.3465ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-244600 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:37 UTC |                     |
	|         | -p download-only-244600           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| delete  | -p download-only-244600           | download-only-244600 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| start   | -o=json --download-only           | download-only-219100 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC |                     |
	|         | -p download-only-219100           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| delete  | -p download-only-219100           | download-only-219100 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC | 07 Mar 24 22:38 UTC |
	| start   | -o=json --download-only           | download-only-409000 | minikube7\jenkins | v1.32.0 | 07 Mar 24 22:38 UTC |                     |
	|         | -p download-only-409000           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 22:38:19
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 22:38:19.229091   13556 out.go:291] Setting OutFile to fd 748 ...
	I0307 22:38:19.229779   13556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:38:19.229779   13556 out.go:304] Setting ErrFile to fd 768...
	I0307 22:38:19.229779   13556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:38:19.253309   13556 out.go:298] Setting JSON to true
	I0307 22:38:19.255454   13556 start.go:129] hostinfo: {"hostname":"minikube7","uptime":10053,"bootTime":1709841045,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0307 22:38:19.255454   13556 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 22:38:19.257998   13556 out.go:97] [download-only-409000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0307 22:38:19.257998   13556 notify.go:220] Checking for updates...
	I0307 22:38:19.265195   13556 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 22:38:19.267893   13556 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0307 22:38:19.271071   13556 out.go:169] MINIKUBE_LOCATION=16214
	I0307 22:38:19.273648   13556 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0307 22:38:19.279684   13556 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 22:38:19.280461   13556 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 22:38:23.972182   13556 out.go:97] Using the hyperv driver based on user configuration
	I0307 22:38:23.979194   13556 start.go:297] selected driver: hyperv
	I0307 22:38:23.979194   13556 start.go:901] validating driver "hyperv" against <nil>
	I0307 22:38:23.979194   13556 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 22:38:24.025206   13556 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0307 22:38:24.026118   13556 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 22:38:24.026118   13556 cni.go:84] Creating CNI manager for ""
	I0307 22:38:24.026118   13556 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0307 22:38:24.026118   13556 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0307 22:38:24.026648   13556 start.go:340] cluster config:
	{Name:download-only-409000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 22:38:24.026773   13556 iso.go:125] acquiring lock: {Name:mk41e0d38e058de906ab8df117c3158b3dc0e5b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:38:24.029739   13556 out.go:97] Starting "download-only-409000" primary control-plane node in "download-only-409000" cluster
	I0307 22:38:24.030347   13556 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 22:38:24.076583   13556 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0307 22:38:24.076583   13556 cache.go:56] Caching tarball of preloaded images
	I0307 22:38:24.076583   13556 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 22:38:24.079884   13556 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0307 22:38:24.079884   13556 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0307 22:38:24.150452   13556 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0307 22:38:27.797909   13556 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0307 22:38:27.798748   13556 preload.go:255] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0307 22:38:28.601463   13556 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0307 22:38:28.609284   13556 profile.go:142] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\download-only-409000\config.json ...
	I0307 22:38:28.609995   13556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\download-only-409000\config.json: {Name:mkb0f129bb7b9c050b4f03bd838387d503e2e292 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:38:28.611495   13556 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0307 22:38:28.612281   13556 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\windows\amd64\v1.29.0-rc.2/kubectl.exe
	
	
	* The control-plane node download-only-409000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-409000"

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 22:38:29.783921   11736 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.47s)
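The download-only run above saves its generated cluster config to profiles\download-only-409000\config.json. A minimal Go sketch, assuming the file keeps the top-level fields shown in the config dump (Name, Driver, Memory, KubernetesConfig), that reads it back without depending on minikube's own types:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Path taken from the log above; adjust for a different MINIKUBE_HOME.
	raw, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\download-only-409000\config.json`)
	if err != nil {
		panic(err)
	}
	// Decode into a generic map so the sketch does not assume minikube's structs.
	var cfg map[string]interface{}
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Println("Name:  ", cfg["Name"])
	fmt.Println("Driver:", cfg["Driver"])
	fmt.Println("Memory:", cfg["Memory"])
	if kc, ok := cfg["KubernetesConfig"].(map[string]interface{}); ok {
		fmt.Println("KubernetesVersion:", kc["KubernetesVersion"])
	}
}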

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2585795s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.26s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-409000
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-409000: (1.2423802s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.24s)

                                                
                                    
TestBinaryMirror (6.13s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-201700 --alsologtostderr --binary-mirror http://127.0.0.1:54908 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-201700 --alsologtostderr --binary-mirror http://127.0.0.1:54908 --driver=hyperv: (5.304937s)
helpers_test.go:175: Cleaning up "binary-mirror-201700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-201700
--- PASS: TestBinaryMirror (6.13s)
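TestBinaryMirror points --binary-mirror at http://127.0.0.1:54908, that is, a local HTTP endpoint serving the Kubernetes binaries. A minimal sketch of such a mirror, assuming a hypothetical local directory of release files:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Hypothetical directory holding the kubectl/kubelet/kubeadm release files.
	fs := http.FileServer(http.Dir(`C:\mirror\kubernetes-release`))
	log.Println("serving binary mirror on http://127.0.0.1:54908")
	log.Fatal(http.ListenAndServe("127.0.0.1:54908", fs))
}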

                                                
                                    
TestOffline (381.05s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-463800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-463800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (5m32.4360665s)
helpers_test.go:175: Cleaning up "offline-docker-463800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-463800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-463800: (48.6116635s)
--- PASS: TestOffline (381.05s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.26s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-723800
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-723800: exit status 85 (255.6046ms)

                                                
                                                
-- stdout --
	* Profile "addons-723800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-723800"

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 22:38:42.546060    8288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.26s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.27s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-723800
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-723800: exit status 85 (266.8291ms)

                                                
                                                
-- stdout --
	* Profile "addons-723800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-723800"

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 22:38:42.554678    5400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.27s)
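Both PreSetup tests above expect exit status 85 when an addons command targets the not-yet-created addons-723800 profile. A sketch, using os/exec, of how a caller can observe that exit code from the same binary; the meaning of 85 as "profile not found" is taken from the test output above, not from minikube documentation:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe", "addons", "enable", "dashboard", "-p", "addons-723800")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The runs above report exit status 85 when the profile does not exist.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}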

                                                
                                    
TestAddons/Setup (354.61s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-723800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-723800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (5m54.6100712s)
--- PASS: TestAddons/Setup (354.61s)

                                                
                                    
TestAddons/parallel/Ingress (67.37s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-723800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-723800 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-723800 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f470c3cb-c0e5-405f-be8f-823ff3ed745b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f470c3cb-c0e5-405f-be8f-823ff3ed745b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0170894s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-723800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-723800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.5920409s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-723800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0307 22:45:12.978414   13412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-723800 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-723800 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-723800 ip: (2.7518035s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.20.63.241
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-723800 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-723800 addons disable ingress-dns --alsologtostderr -v=1: (16.482086s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-723800 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-723800 addons disable ingress --alsologtostderr -v=1: (23.389471s)
--- PASS: TestAddons/parallel/Ingress (67.37s)
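The ingress check above curls 127.0.0.1 from inside the VM with the Host header set to nginx.example.com. A sketch of the same style of request issued from Go against the node IP that "minikube ip" reported for this run (172.20.63.241); hitting the node IP directly is an assumption for illustration, since the test itself goes through "minikube ssh":

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://172.20.63.241/", nil)
	if err != nil {
		panic(err)
	}
	// The Host header, not the URL, selects the nginx ingress rule.
	req.Host = "nginx.example.com"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}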

                                                
                                    
TestAddons/parallel/InspektorGadget (27.09s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-z5nlt" [aa00a150-04ee-4ac5-9ff8-5e7a0a15699c] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0249509s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-723800
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-723800: (21.0577252s)
--- PASS: TestAddons/parallel/InspektorGadget (27.09s)

                                                
                                    
TestAddons/parallel/MetricsServer (21.54s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 26.6592ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-5572t" [75d7cf2c-199d-4d5f-8105-f7b1bdb0812d] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0116339s
addons_test.go:415: (dbg) Run:  kubectl --context addons-723800 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-723800 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-723800 addons disable metrics-server --alsologtostderr -v=1: (15.3102048s)
--- PASS: TestAddons/parallel/MetricsServer (21.54s)

                                                
                                    
TestAddons/parallel/HelmTiller (26.55s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 25.6686ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-k7wdr" [bd5da928-4e4f-4fc2-8cea-2a5c1602c03e] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0162817s
addons_test.go:473: (dbg) Run:  kubectl --context addons-723800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-723800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.4114588s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-723800 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-723800 addons disable helm-tiller --alsologtostderr -v=1: (15.0693727s)
--- PASS: TestAddons/parallel/HelmTiller (26.55s)

                                                
                                    
TestAddons/parallel/CSI (101.17s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 7.9989ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-723800 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-723800 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0287c450-4164-417d-9e47-7ec4bd630841] Pending
helpers_test.go:344: "task-pv-pod" [0287c450-4164-417d-9e47-7ec4bd630841] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0287c450-4164-417d-9e47-7ec4bd630841] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.0125716s
addons_test.go:584: (dbg) Run:  kubectl --context addons-723800 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-723800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-723800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-723800 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-723800 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-723800 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-723800 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9ad33588-57e3-4106-8806-a8e7f783bd4f] Pending
helpers_test.go:344: "task-pv-pod-restore" [9ad33588-57e3-4106-8806-a8e7f783bd4f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9ad33588-57e3-4106-8806-a8e7f783bd4f] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0081416s
addons_test.go:626: (dbg) Run:  kubectl --context addons-723800 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-723800 delete pod task-pv-pod-restore: (1.4304309s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-723800 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-723800 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-723800 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-723800 addons disable csi-hostpath-driver --alsologtostderr -v=1: (19.3970039s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-723800 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-723800 addons disable volumesnapshots --alsologtostderr -v=1: (13.4239347s)
--- PASS: TestAddons/parallel/CSI (101.17s)
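The repeated helpers_test.go:394 lines above are a polling loop: the helper re-runs "kubectl get pvc ... -o jsonpath={.status.phase}" until the claim reports Bound or the 6m0s wait expires. A sketch of that loop shape using the context and PVC names from this run; the helper's actual implementation may differ:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCBound(kubeContext, pvc, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", pvc, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, pvc, timeout)
}

func main() {
	if err := waitForPVCBound("addons-723800", "hpvc", "default", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("hpvc is Bound")
}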

                                                
                                    
TestAddons/parallel/Headlamp (34.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-723800 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-723800 --alsologtostderr -v=1: (16.4700745s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-cc8hr" [56cc7e77-150c-4907-9023-ae2fdb02e3bd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-cc8hr" [56cc7e77-150c-4907-9023-ae2fdb02e3bd] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.0192163s
--- PASS: TestAddons/parallel/Headlamp (34.49s)

                                                
                                    
TestAddons/parallel/CloudSpanner (21.1s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-c972l" [16e7797d-9308-47a9-bb06-1d8e2bbca5f8] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0235047s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-723800
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-723800: (15.061873s)
--- PASS: TestAddons/parallel/CloudSpanner (21.10s)

                                                
                                    
TestAddons/parallel/LocalPath (32.27s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-723800 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-723800 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723800 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [73fedb2c-e68c-4465-92e4-e4b07e1f201d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [73fedb2c-e68c-4465-92e4-e4b07e1f201d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [73fedb2c-e68c-4465-92e4-e4b07e1f201d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.0153796s
addons_test.go:891: (dbg) Run:  kubectl --context addons-723800 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-723800 ssh "cat /opt/local-path-provisioner/pvc-9400bf85-94ed-489b-a648-5551c6e089a1_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-723800 ssh "cat /opt/local-path-provisioner/pvc-9400bf85-94ed-489b-a648-5551c6e089a1_default_test-pvc/file1": (10.4807903s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-723800 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-723800 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-723800 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-723800 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (8.1189126s)
--- PASS: TestAddons/parallel/LocalPath (32.27s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (22.17s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wthv5" [143a4a10-8313-40ab-a7f6-613f980a9728] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0192319s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-723800
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-723800: (16.14312s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (22.17s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-64vpb" [273b178e-df9d-4386-b519-2c89edbb2a23] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0120021s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.34s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-723800 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-723800 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.34s)

                                                
                                    
TestAddons/StoppedEnableDisable (50.17s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-723800
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-723800: (38.2243671s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-723800
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-723800: (4.9496539s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-723800
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-723800: (4.397602s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-723800
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-723800: (2.5934317s)
--- PASS: TestAddons/StoppedEnableDisable (50.17s)

                                                
                                    
TestCertOptions (467.39s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-026900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E0308 01:02:40.702480    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-026900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (6m42.7947276s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-026900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-026900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (9.4811374s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-026900 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-026900 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-026900 -- "sudo cat /etc/kubernetes/admin.conf": (8.9356014s)
helpers_test.go:175: Cleaning up "cert-options-026900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-026900
E0308 01:09:37.413490    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-026900: (46.027424s)
--- PASS: TestCertOptions (467.39s)
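TestCertOptions starts the cluster with extra --apiserver-ips and --apiserver-names and then reads the apiserver certificate with openssl to confirm they appear as subject alternative names. A sketch of the same check with Go's crypto/x509, assuming the certificate has been copied out of the VM to a local apiserver.crt:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Local copy of /var/lib/minikube/certs/apiserver.crt (assumption for illustration).
	raw, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// For this run these should include localhost, www.google.com, 127.0.0.1 and 192.168.15.15.
	fmt.Println("DNS names:   ", cert.DNSNames)
	fmt.Println("IP addresses:", cert.IPAddresses)
}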

                                                
                                    
TestCertExpiration (870.17s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-377500 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-377500 --memory=2048 --cert-expiration=3m --driver=hyperv: (6m16.6680717s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-377500 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-377500 --memory=2048 --cert-expiration=8760h --driver=hyperv: (4m26.341824s)
helpers_test.go:175: Cleaning up "cert-expiration-377500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-377500
E0308 01:14:37.404396    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-377500: (47.1520137s)
--- PASS: TestCertExpiration (870.17s)

                                                
                                    
TestDockerFlags (532.62s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-128700 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-128700 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (7m47.8287405s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-128700 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-128700 ssh "sudo systemctl show docker --property=Environment --no-pager": (9.3656302s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-128700 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
E0308 01:04:37.403995    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-128700 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (9.4927578s)
helpers_test.go:175: Cleaning up "docker-flags-128700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-128700
E0308 01:04:58.907258    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-128700: (45.9273462s)
--- PASS: TestDockerFlags (532.62s)
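TestDockerFlags passes --docker-env=FOO=BAR and --docker-env=BAZ=BAT and then reads "systemctl show docker --property=Environment" to confirm they reached the docker unit. A sketch of checking such a property line for the expected pairs; the sample line is illustrative rather than copied from this run's output:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Example of the shape systemctl prints for the Environment property.
	line := "Environment=FOO=BAR BAZ=BAT"
	value := strings.TrimPrefix(line, "Environment=")
	env := map[string]bool{}
	for _, kv := range strings.Fields(value) {
		env[kv] = true
	}
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("%s present: %v\n", want, env[want])
	}
}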

                                                
                                    
TestForceSystemdFlag (242.44s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-463800 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-463800 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (3m5.541786s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-463800 ssh "docker info --format {{.CgroupDriver}}"
E0308 00:59:37.401503    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-463800 ssh "docker info --format {{.CgroupDriver}}": (9.5389429s)
helpers_test.go:175: Cleaning up "force-systemd-flag-463800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-463800
E0308 00:59:58.909116    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-463800: (47.3602206s)
--- PASS: TestForceSystemdFlag (242.44s)

                                                
                                    
TestForceSystemdEnv (407.03s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-642800 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E0308 01:14:58.921402    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-642800 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (5m57.4696891s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-642800 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-642800 ssh "docker info --format {{.CgroupDriver}}": (9.1480204s)
helpers_test.go:175: Cleaning up "force-systemd-env-642800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-642800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-642800: (40.4058176s)
--- PASS: TestForceSystemdEnv (407.03s)

                                                
                                    
TestErrorSpam/start (15.98s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 start --dry-run: (5.3434662s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 start --dry-run: (5.3452625s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 start --dry-run: (5.2673543s)
--- PASS: TestErrorSpam/start (15.98s)

                                                
                                    
TestErrorSpam/status (32.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 status: (11.2683498s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 status: (10.7197711s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 status: (10.7136509s)
--- PASS: TestErrorSpam/status (32.74s)

                                                
                                    
TestErrorSpam/pause (20.2s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 pause: (6.9700183s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 pause: (6.6092834s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 pause: (6.5951594s)
--- PASS: TestErrorSpam/pause (20.20s)

                                                
                                    
TestErrorSpam/unpause (20.08s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 unpause: (6.7549459s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 unpause: (6.6536555s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 unpause: (6.6559942s)
--- PASS: TestErrorSpam/unpause (20.08s)

                                                
                                    
TestErrorSpam/stop (53.39s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 stop: (30.1481418s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 stop
E0307 22:54:37.325040    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 stop: (10.1561072s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-267700 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-267700 stop: (13.0645586s)
--- PASS: TestErrorSpam/stop (53.39s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\8324\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (215.72s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-934300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-934300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m35.7009145s)
--- PASS: TestFunctional/serial/StartWithProxy (215.72s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (114.15s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-934300 --alsologtostderr -v=8
E0307 22:59:37.337314    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-934300 --alsologtostderr -v=8: (1m54.1335154s)
functional_test.go:659: soft start took 1m54.144377s for "functional-934300" cluster.
--- PASS: TestFunctional/serial/SoftStart (114.15s)

                                                
                                    
TestFunctional/serial/KubeContext (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.12s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-934300 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (24.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 cache add registry.k8s.io/pause:3.1: (8.1038048s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 cache add registry.k8s.io/pause:3.3: (8.2737842s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 cache add registry.k8s.io/pause:latest: (8.1102567s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (24.49s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (9.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-934300 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local311722424\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-934300 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local311722424\001: (1.7476683s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 cache add minikube-local-cache-test:functional-934300
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 cache add minikube-local-cache-test:functional-934300: (7.4497529s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 cache delete minikube-local-cache-test:functional-934300
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-934300
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.63s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.61s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh sudo crictl images: (8.6134805s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.61s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (33.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh sudo docker rmi registry.k8s.io/pause:latest: (8.52493s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-934300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.5855686s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 23:01:27.468618   10020 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 cache reload: (7.5202297s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (8.5366097s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (33.17s)
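The cache_reload run above follows a remove / verify-missing / reload / verify-present cycle. Below is a minimal Go sketch of that cycle (illustrative only, not the actual functional_test.go code); the binary path, profile name, and image tag are taken from the log above.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the same minikube binary used throughout this report and echoes its output.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
	fmt.Printf("minikube %v\n%s\n", args, out)
	return err
}

func main() {
	const profile = "functional-934300" // profile name taken from the log above

	// 1. Remove the image inside the VM so the local cache is the only remaining source.
	_ = run("-p", profile, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")

	// 2. The image should now be gone, so crictl inspecti is expected to fail (exit status 1 above).
	if run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		fmt.Println("unexpected: image still present before reload")
	}

	// 3. Reload the cache, then confirm the image is back inside the VM.
	_ = run("-p", profile, "cache", "reload")
	if err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("unexpected: image still missing after cache reload:", err)
	}
}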

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.52s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 kubectl -- --context functional-934300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.42s)

                                                
                                    
TestFunctional/serial/ExtraConfig (116.78s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-934300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-934300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m56.7743875s)
functional_test.go:757: restart took 1m56.7750883s for "functional-934300" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (116.78s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-934300 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.17s)

                                                
                                    
TestFunctional/serial/LogsCmd (8.09s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 logs: (8.0849373s)
--- PASS: TestFunctional/serial/LogsCmd (8.09s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (9.88s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3598511872\001\logs.txt
E0307 23:04:37.338626    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3598511872\001\logs.txt: (9.8772881s)
--- PASS: TestFunctional/serial/LogsFileCmd (9.88s)

                                                
                                    
TestFunctional/serial/InvalidService (19.63s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-934300 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-934300
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-934300: exit status 115 (15.4785299s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://172.20.58.27:30566 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 23:04:42.245509    4224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_service_c9bf6787273d25f6c9d72c0b156373dea6a4fe44_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-934300 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (19.63s)
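The InvalidService check above applies a service with no backing pod and expects "minikube service" to fail with exit status 115 (SVC_UNREACHABLE), then cleans up. A rough Go sketch of that negative check, reusing the commands and exit code shown in the log (illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const profile = "functional-934300" // taken from the log above

	// Create a Service whose selector matches no running pod (the test's testdata manifest).
	exec.Command("kubectl", "--context", profile, "apply", "-f", `testdata\invalidsvc.yaml`).Run()

	// "minikube service" cannot reach a pod-less service; the log shows exit status 115
	// with reason SVC_UNREACHABLE.
	err := exec.Command("out/minikube-windows-amd64.exe", "service", "invalid-svc", "-p", profile).Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("minikube service exited with status", exitErr.ExitCode())
	}

	// Clean up the invalid service.
	exec.Command("kubectl", "--context", profile, "delete", "-f", `testdata\invalidsvc.yaml`).Run()
}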

                                                
                                    
TestFunctional/parallel/StatusCmd (38.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 status: (13.0466011s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (12.5660576s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 status -o json: (12.9583204s)
--- PASS: TestFunctional/parallel/StatusCmd (38.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (24.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-934300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-934300 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-ghw9n" [1762e16f-60c0-4293-bbdf-7aa2391fd98c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-ghw9n" [1762e16f-60c0-4293-bbdf-7aa2391fd98c] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0229423s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 service hello-node-connect --url: (16.4231076s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.20.58.27:30516
functional_test.go:1671: http://172.20.58.27:30516: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-ghw9n

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.20.58.27:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=172.20.58.27:30516
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (24.93s)
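The ServiceCmdConnect block above is the end-to-end NodePort path: create a deployment, expose it, let "minikube service --url" resolve the node URL, then fetch it (the echoserver request dump is printed above). A compact Go sketch of the same round trip, with the deployment name and image taken from the log (illustrative only):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	const profile = "functional-934300"

	// Deploy and expose the echoserver exactly as the log does.
	exec.Command("kubectl", "--context", profile, "create", "deployment", "hello-node-connect",
		"--image=registry.k8s.io/echoserver:1.8").Run()
	exec.Command("kubectl", "--context", profile, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080").Run()

	// (The real test waits for the pod to be Running before this step.)
	out, _ := exec.Command("out/minikube-windows-amd64.exe", "-p", profile,
		"service", "hello-node-connect", "--url").Output()
	url := strings.TrimSpace(string(out))

	// Fetch the NodePort URL; the echoserver replies with the request dump shown above.
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %s\n%s", url, resp.Status, body)
}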

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.74s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (47.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c743467f-e104-4404-b662-be573f6ec4a0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0109538s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-934300 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-934300 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-934300 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-934300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [056a653e-447b-45f9-8be6-921bf8c7905f] Pending
helpers_test.go:344: "sp-pod" [056a653e-447b-45f9-8be6-921bf8c7905f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [056a653e-447b-45f9-8be6-921bf8c7905f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 32.0231987s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-934300 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-934300 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-934300 delete -f testdata/storage-provisioner/pod.yaml: (1.2489849s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-934300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [86f76e86-9978-44e2-b909-84bad8065900] Pending
helpers_test.go:344: "sp-pod" [86f76e86-9978-44e2-b909-84bad8065900] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [86f76e86-9978-44e2-b909-84bad8065900] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0083724s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-934300 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.24s)
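The PersistentVolumeClaim test above verifies that data written into the claim survives pod recreation: it writes a file from the first sp-pod, deletes the pod, and lists the mount from a fresh pod. A condensed Go sketch of that persistence check using the same kubectl steps as the log (illustrative, not the test source; the waits for pod readiness are elided):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the functional-934300 context used throughout this report.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-934300"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	// Create the claim and a pod mounting it (manifests come from the test's testdata directory).
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")

	// (The real test waits for sp-pod to be Running here.) Write a marker file into the mount.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; if the claim persisted, the marker file is still visible from the new pod.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("ls /tmp/mount -> %s (err=%v)\n", out, err)
}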

                                                
                                    
TestFunctional/parallel/SSHCmd (17.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh "echo hello": (8.8989212s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh "cat /etc/hostname": (8.4426898s)
--- PASS: TestFunctional/parallel/SSHCmd (17.34s)

                                                
                                    
TestFunctional/parallel/CpCmd (55.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.0346741s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh -n functional-934300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh -n functional-934300 "sudo cat /home/docker/cp-test.txt": (9.8526573s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 cp functional-934300:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd4050312373\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 cp functional-934300:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd4050312373\001\cp-test.txt: (9.8351506s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh -n functional-934300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh -n functional-934300 "sudo cat /home/docker/cp-test.txt": (10.0360064s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.5727973s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh -n functional-934300 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh -n functional-934300 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.4829348s)
--- PASS: TestFunctional/parallel/CpCmd (55.83s)

                                                
                                    
TestFunctional/parallel/MySQL (68.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-934300 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-zw4pp" [831a734e-2bb5-484c-ad96-7a8b868d3a1e] Pending
helpers_test.go:344: "mysql-859648c796-zw4pp" [831a734e-2bb5-484c-ad96-7a8b868d3a1e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-zw4pp" [831a734e-2bb5-484c-ad96-7a8b868d3a1e] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 44.0173917s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-934300 exec mysql-859648c796-zw4pp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-934300 exec mysql-859648c796-zw4pp -- mysql -ppassword -e "show databases;": exit status 1 (478.5895ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-934300 exec mysql-859648c796-zw4pp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-934300 exec mysql-859648c796-zw4pp -- mysql -ppassword -e "show databases;": exit status 1 (301.3845ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-934300 exec mysql-859648c796-zw4pp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-934300 exec mysql-859648c796-zw4pp -- mysql -ppassword -e "show databases;": exit status 1 (293.5068ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-934300 exec mysql-859648c796-zw4pp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-934300 exec mysql-859648c796-zw4pp -- mysql -ppassword -e "show databases;": exit status 1 (281.7232ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-934300 exec mysql-859648c796-zw4pp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-934300 exec mysql-859648c796-zw4pp -- mysql -ppassword -e "show databases;": exit status 1 (264.5923ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-934300 exec mysql-859648c796-zw4pp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-934300 exec mysql-859648c796-zw4pp -- mysql -ppassword -e "show databases;": exit status 1 (284.6824ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-934300 exec mysql-859648c796-zw4pp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (68.23s)
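The MySQL run above shows the expected warm-up pattern: the first "show databases;" attempts fail with ERROR 2002 (socket not up yet) and ERROR 1045 (root password not applied yet) until mysqld finishes initializing, so the check simply retries. A small Go retry sketch mirroring those log lines (illustrative; the pod name and password are the ones shown above):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const pod = "mysql-859648c796-zw4pp" // pod name from the log above

	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-934300", "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("attempt %d succeeded:\n%s", attempt, out)
			return
		}
		// ERROR 2002 and ERROR 1045 are both expected while mysqld initializes, so just retry.
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(5 * time.Second)
	}
}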

                                                
                                    
TestFunctional/parallel/FileSync (9.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/8324/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo cat /etc/test/nested/copy/8324/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo cat /etc/test/nested/copy/8324/hosts": (9.5521757s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (9.56s)

                                                
                                    
TestFunctional/parallel/CertSync (59.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/8324.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo cat /etc/ssl/certs/8324.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo cat /etc/ssl/certs/8324.pem": (10.8411553s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/8324.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo cat /usr/share/ca-certificates/8324.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo cat /usr/share/ca-certificates/8324.pem": (9.2058953s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo cat /etc/ssl/certs/51391683.0": (9.9386104s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/83242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo cat /etc/ssl/certs/83242.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo cat /etc/ssl/certs/83242.pem": (9.5929077s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/83242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo cat /usr/share/ca-certificates/83242.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo cat /usr/share/ca-certificates/83242.pem": (10.2394744s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.8610777s)
--- PASS: TestFunctional/parallel/CertSync (59.72s)
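CertSync above probes three in-guest locations for each synced certificate: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and the corresponding /etc/ssl/certs/*.0 entry. A short Go sketch performing the same "ssh sudo cat" probes for the 8324.pem certificate shown in the log (illustrative; the paths are the ones printed above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const profile = "functional-934300"

	// The three in-guest locations probed by the test for the 8324.pem certificate.
	paths := []string{
		"/etc/ssl/certs/8324.pem",
		"/usr/share/ca-certificates/8324.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		err := exec.Command("out/minikube-windows-amd64.exe", "-p", profile,
			"ssh", "sudo cat "+p).Run()
		fmt.Printf("%s present: %v\n", p, err == nil)
	}
}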

                                                
                                    
TestFunctional/parallel/NodeLabels (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-934300 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.19s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (10.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-934300 ssh "sudo systemctl is-active crio": exit status 1 (10.3944843s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 23:05:40.817706    9272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.40s)

                                                
                                    
TestFunctional/parallel/License (2.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (2.832278s)
--- PASS: TestFunctional/parallel/License (2.86s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (20.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-934300 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-934300 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-wnjdb" [97372c93-f9bc-4e7a-b975-f66f241ded92] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-wnjdb" [97372c93-f9bc-4e7a-b975-f66f241ded92] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.0210408s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (11.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.7441138s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (11.27s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (10.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (9.885078s)
functional_test.go:1311: Took "9.8852764s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "310.0867ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (10.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 service list: (12.9979692s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (13.00s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (10.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (10.4809612s)
functional_test.go:1362: Took "10.4809612s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "253.3368ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (10.74s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (13.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 service list -o json: (13.1258691s)
functional_test.go:1490: Took "13.1258691s" to run "out/minikube-windows-amd64.exe -p functional-934300 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (13.13s)

                                                
                                    
TestFunctional/parallel/Version/short (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 version --short
--- PASS: TestFunctional/parallel/Version/short (0.26s)

                                                
                                    
TestFunctional/parallel/Version/components (7.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 version -o=json --components: (7.294372s)
--- PASS: TestFunctional/parallel/Version/components (7.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (7.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image ls --format short --alsologtostderr: (7.0561864s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-934300 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-934300
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-934300
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-934300 image ls --format short --alsologtostderr:
W0307 23:08:35.483645   13024 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0307 23:08:35.573721   13024 out.go:291] Setting OutFile to fd 704 ...
I0307 23:08:35.574438   13024 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 23:08:35.574438   13024 out.go:304] Setting ErrFile to fd 924...
I0307 23:08:35.574438   13024 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 23:08:35.595094   13024 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 23:08:35.595347   13024 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 23:08:35.596466   13024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
I0307 23:08:37.674039   13024 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:08:37.674039   13024 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:37.685155   13024 ssh_runner.go:195] Run: systemctl --version
I0307 23:08:37.685155   13024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
I0307 23:08:39.746125   13024 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:08:39.757238   13024 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:39.757238   13024 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
I0307 23:08:42.261548   13024 main.go:141] libmachine: [stdout =====>] : 172.20.58.27

                                                
                                                
I0307 23:08:42.263087   13024 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:42.263443   13024 sshutil.go:53] new ssh client: &{IP:172.20.58.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-934300\id_rsa Username:docker}
I0307 23:08:42.369660   13024 ssh_runner.go:235] Completed: systemctl --version: (4.6843956s)
I0307 23:08:42.379458   13024 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.06s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (6.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image ls --format table --alsologtostderr: (6.8765256s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-934300 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-934300 | ff4e8b3740a99 | 30B    |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest            | e4720093a3c13 | 187MB  |
| docker.io/library/nginx                     | alpine            | 6913ed9ec8d00 | 42.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| gcr.io/google-containers/addon-resizer      | functional-934300 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-934300 image ls --format table --alsologtostderr:
W0307 23:08:47.062159   13676 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0307 23:08:47.161225   13676 out.go:291] Setting OutFile to fd 1008 ...
I0307 23:08:47.174000   13676 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 23:08:47.174000   13676 out.go:304] Setting ErrFile to fd 828...
I0307 23:08:47.174097   13676 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 23:08:47.189366   13676 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 23:08:47.190602   13676 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 23:08:47.190860   13676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
I0307 23:08:49.307311   13676 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:08:49.307396   13676 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:49.320325   13676 ssh_runner.go:195] Run: systemctl --version
I0307 23:08:49.320325   13676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
I0307 23:08:51.365604   13676 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:08:51.365604   13676 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:51.375622   13676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
I0307 23:08:53.636724   13676 main.go:141] libmachine: [stdout =====>] : 172.20.58.27

                                                
                                                
I0307 23:08:53.636724   13676 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:53.647737   13676 sshutil.go:53] new ssh client: &{IP:172.20.58.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-934300\id_rsa Username:docker}
I0307 23:08:53.746673   13676 ssh_runner.go:235] Completed: systemctl --version: (4.426227s)
I0307 23:08:53.755957   13676 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (6.88s)
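Note: ImageListTable, ImageListJson and ImageListYaml all exercise the same subcommand with a different --format value; against the profile from this run the three invocations are:

out/minikube-windows-amd64.exe -p functional-934300 image ls --format table --alsologtostderr
out/minikube-windows-amd64.exe -p functional-934300 image ls --format json --alsologtostderr
out/minikube-windows-amd64.exe -p functional-934300 image ls --format yaml --alsologtostderr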

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (7.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image ls --format json --alsologtostderr: (7.1440227s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-934300 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0f
be50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-934300"],"size":"32900000"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":[],"repoTags":["docker.io/library/n
ginx:latest"],"size":"187000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"ff4e8b3740a99cad1792a8d63b472ddd03ea3faa1932576f153d10b69651a491","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-934300"],"size":"30"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-934300 image ls --format json --alsologtostderr:
W0307 23:08:43.786287    9240 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0307 23:08:43.872552    9240 out.go:291] Setting OutFile to fd 956 ...
I0307 23:08:43.876775    9240 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 23:08:43.876775    9240 out.go:304] Setting ErrFile to fd 864...
I0307 23:08:43.876775    9240 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 23:08:43.895262    9240 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 23:08:43.895614    9240 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 23:08:43.895989    9240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
I0307 23:08:45.983695    9240 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:08:45.983921    9240 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:45.996574    9240 ssh_runner.go:195] Run: systemctl --version
I0307 23:08:45.996574    9240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
I0307 23:08:48.123622    9240 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:08:48.134021    9240 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:48.134300    9240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
I0307 23:08:50.590413    9240 main.go:141] libmachine: [stdout =====>] : 172.20.58.27

                                                
                                                
I0307 23:08:50.596844    9240 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:50.597503    9240 sshutil.go:53] new ssh client: &{IP:172.20.58.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-934300\id_rsa Username:docker}
I0307 23:08:50.707035    9240 ssh_runner.go:235] Completed: systemctl --version: (4.7104171s)
I0307 23:08:50.720481    9240 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.15s)
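Note: the JSON above is a single array of objects with id, repoDigests, repoTags and size fields, so it lends itself to scripted checks. A minimal PowerShell sketch, assuming the same profile name and output shape as this run (not part of the test itself):

# Flatten the repoTags of every image reported by the profile
$json   = out/minikube-windows-amd64.exe -p functional-934300 image ls --format json
$images = $json -join '' | ConvertFrom-Json
$images | ForEach-Object { $_.repoTags } | Sort-Object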

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (7.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image ls --format yaml --alsologtostderr: (7.2010686s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-934300 image ls --format yaml --alsologtostderr:
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-934300
size: "32900000"
- id: 6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: ff4e8b3740a99cad1792a8d63b472ddd03ea3faa1932576f153d10b69651a491
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-934300
size: "30"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-934300 image ls --format yaml --alsologtostderr:
W0307 23:08:36.571663   10392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0307 23:08:36.656656   10392 out.go:291] Setting OutFile to fd 700 ...
I0307 23:08:36.669540   10392 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 23:08:36.669540   10392 out.go:304] Setting ErrFile to fd 904...
I0307 23:08:36.669540   10392 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 23:08:36.682996   10392 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 23:08:36.682996   10392 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 23:08:36.683718   10392 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
I0307 23:08:38.818338   10392 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:08:38.818338   10392 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:38.829672   10392 ssh_runner.go:195] Run: systemctl --version
I0307 23:08:38.829672   10392 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
I0307 23:08:40.927214   10392 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:08:40.935170   10392 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:40.935170   10392 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
I0307 23:08:43.463573   10392 main.go:141] libmachine: [stdout =====>] : 172.20.58.27

                                                
                                                
I0307 23:08:43.464531   10392 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:43.465076   10392 sshutil.go:53] new ssh client: &{IP:172.20.58.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-934300\id_rsa Username:docker}
I0307 23:08:43.584978   10392 ssh_runner.go:235] Completed: systemctl --version: (4.7552078s)
I0307 23:08:43.595725   10392 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (24.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-934300 ssh pgrep buildkitd: exit status 1 (9.1236479s)

                                                
                                                
** stderr ** 
	W0307 23:08:42.559666     272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image build -t localhost/my-image:functional-934300 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image build -t localhost/my-image:functional-934300 testdata\build --alsologtostderr: (8.5713041s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-934300 image build -t localhost/my-image:functional-934300 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 4505513e6d79
Removing intermediate container 4505513e6d79
---> 8d4151da3b28
Step 3/3 : ADD content.txt /
---> f4ac571d8204
Successfully built f4ac571d8204
Successfully tagged localhost/my-image:functional-934300
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-934300 image build -t localhost/my-image:functional-934300 testdata\build --alsologtostderr:
W0307 23:08:51.680898   12884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0307 23:08:51.766785   12884 out.go:291] Setting OutFile to fd 956 ...
I0307 23:08:51.781517   12884 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 23:08:51.781517   12884 out.go:304] Setting ErrFile to fd 408...
I0307 23:08:51.781517   12884 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 23:08:51.799602   12884 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 23:08:51.814015   12884 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0307 23:08:51.814302   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
I0307 23:08:53.772813   12884 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:08:53.772844   12884 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:53.784382   12884 ssh_runner.go:195] Run: systemctl --version
I0307 23:08:53.785455   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-934300 ).state
I0307 23:08:55.670330   12884 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0307 23:08:55.670424   12884 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:55.670568   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-934300 ).networkadapters[0]).ipaddresses[0]
I0307 23:08:57.910647   12884 main.go:141] libmachine: [stdout =====>] : 172.20.58.27

                                                
                                                
I0307 23:08:57.921343   12884 main.go:141] libmachine: [stderr =====>] : 
I0307 23:08:57.921742   12884 sshutil.go:53] new ssh client: &{IP:172.20.58.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-934300\id_rsa Username:docker}
I0307 23:08:58.035236   12884 ssh_runner.go:235] Completed: systemctl --version: (4.2508144s)
I0307 23:08:58.035298   12884 build_images.go:161] Building image from path: C:\Users\jenkins.minikube7\AppData\Local\Temp\build.223241875.tar
I0307 23:08:58.048387   12884 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0307 23:08:58.082316   12884 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.223241875.tar
I0307 23:08:58.088758   12884 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.223241875.tar: stat -c "%s %y" /var/lib/minikube/build/build.223241875.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.223241875.tar': No such file or directory
I0307 23:08:58.089014   12884 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\AppData\Local\Temp\build.223241875.tar --> /var/lib/minikube/build/build.223241875.tar (3072 bytes)
I0307 23:08:58.143841   12884 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.223241875
I0307 23:08:58.169342   12884 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.223241875 -xf /var/lib/minikube/build/build.223241875.tar
I0307 23:08:58.184193   12884 docker.go:360] Building image: /var/lib/minikube/build/build.223241875
I0307 23:08:58.193020   12884 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-934300 /var/lib/minikube/build/build.223241875
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0307 23:09:00.054257   12884 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-934300 /var/lib/minikube/build/build.223241875: (1.8610833s)
I0307 23:09:00.065666   12884 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.223241875
I0307 23:09:00.093510   12884 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.223241875.tar
I0307 23:09:00.109998   12884 build_images.go:217] Built localhost/my-image:functional-934300 from C:\Users\jenkins.minikube7\AppData\Local\Temp\build.223241875.tar
I0307 23:09:00.109998   12884 build_images.go:133] succeeded building to: functional-934300
I0307 23:09:00.109998   12884 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image ls: (6.3681018s)
E0307 23:09:37.335198    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (24.07s)
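Note: the three build steps logged above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply that testdata\build contains a content.txt plus a Dockerfile equivalent to the following; this is reconstructed from the log, not copied from the repository:

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /

The build itself is driven entirely through the profile:

out/minikube-windows-amd64.exe -p functional-934300 image build -t localhost/my-image:functional-934300 testdata\build --alsologtostderr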

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (4.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.7736687s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-934300
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (22.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image load --daemon gcr.io/google-containers/addon-resizer:functional-934300 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image load --daemon gcr.io/google-containers/addon-resizer:functional-934300 --alsologtostderr: (15.004237s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image ls: (7.2855972s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (22.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (18.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image load --daemon gcr.io/google-containers/addon-resizer:functional-934300 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image load --daemon gcr.io/google-containers/addon-resizer:functional-934300 --alsologtostderr: (11.7496264s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image ls: (7.1296212s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (18.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (25.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.6395691s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-934300
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image load --daemon gcr.io/google-containers/addon-resizer:functional-934300 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image load --daemon gcr.io/google-containers/addon-resizer:functional-934300 --alsologtostderr: (14.4246291s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image ls: (7.4038827s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (25.69s)
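Note: ImageLoadDaemon, ImageReloadDaemon and ImageTagAndLoadDaemon all follow the same host-to-VM pattern: pull or retag an image with the local docker CLI, push it into the VM's image store with image load --daemon, then confirm it with image ls. Condensed from the commands in this run:

docker pull gcr.io/google-containers/addon-resizer:1.8.9
docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-934300
out/minikube-windows-amd64.exe -p functional-934300 image load --daemon gcr.io/google-containers/addon-resizer:functional-934300 --alsologtostderr
out/minikube-windows-amd64.exe -p functional-934300 image ls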

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (40.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-934300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-934300"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-934300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-934300": (26.4692157s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-934300 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-934300 docker-env | Invoke-Expression ; docker images": (13.6123252s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (40.10s)
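Note: docker-env prints PowerShell environment assignments (for example $Env:DOCKER_HOST) that point the host docker CLI at the daemon inside the functional-934300 VM, which is why the test pipes it through Invoke-Expression before running docker images. Interactively the equivalent is:

out/minikube-windows-amd64.exe -p functional-934300 docker-env | Invoke-Expression
docker images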

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image save gcr.io/google-containers/addon-resizer:functional-934300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image save gcr.io/google-containers/addon-resizer:functional-934300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.3522496s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.36s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (2.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 update-context --alsologtostderr -v=2: (2.2528787s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.26s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 update-context --alsologtostderr -v=2: (2.3955147s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.40s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 update-context --alsologtostderr -v=2: (2.3960109s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.40s)
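Note: all three UpdateContextCmd subtests run the same command; update-context refreshes the profile's kubeconfig entry so it matches the cluster's current IP and port (or reports that nothing needed updating). A direct invocation matching this run:

out/minikube-windows-amd64.exe -p functional-934300 update-context --alsologtostderr -v=2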

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (14.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image rm gcr.io/google-containers/addon-resizer:functional-934300 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image rm gcr.io/google-containers/addon-resizer:functional-934300 --alsologtostderr: (7.6533842s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image ls: (7.0719129s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (14.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (16.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.0790236s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image ls: (6.6614524s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (16.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-934300
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-934300 image save --daemon gcr.io/google-containers/addon-resizer:functional-934300 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-934300 image save --daemon gcr.io/google-containers/addon-resizer:functional-934300 --alsologtostderr: (8.6151936s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-934300
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.00s)
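Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together round-trip the same image through a tarball and back into the host docker daemon. Condensed from this run (the tarball path is the Jenkins workspace used here):

out/minikube-windows-amd64.exe -p functional-934300 image save gcr.io/google-containers/addon-resizer:functional-934300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
out/minikube-windows-amd64.exe -p functional-934300 image rm gcr.io/google-containers/addon-resizer:functional-934300 --alsologtostderr
out/minikube-windows-amd64.exe -p functional-934300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
out/minikube-windows-amd64.exe -p functional-934300 image save --daemon gcr.io/google-containers/addon-resizer:functional-934300 --alsologtostderr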

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (7.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-934300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-934300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-934300 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-934300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4040: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 4556: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (7.39s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-934300 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-934300 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [bda86936-ce59-4b0e-8300-efc35cb872b5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [bda86936-ce59-4b0e-8300-efc35cb872b5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.0110484s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.51s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-934300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2616: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
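Note: the tunnel subtests keep out/minikube-windows-amd64.exe -p functional-934300 tunnel --alsologtostderr running as a background daemon while testdata\testsvc.yaml (the run=nginx-svc pod above) is deployed, then kill the tunnel processes; the OpenProcess/TerminateProcess messages from helpers_test.go are non-fatal for this run, since all of the tunnel subtests still pass. The manual equivalent is one foreground tunnel plus the deployment:

out/minikube-windows-amd64.exe -p functional-934300 tunnel --alsologtostderr
kubectl --context functional-934300 apply -f testdata\testsvc.yaml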

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.45s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-934300
--- PASS: TestFunctional/delete_addon-resizer_images (0.45s)

                                                
                                    
TestFunctional/delete_my-image_image (0.18s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-934300
--- PASS: TestFunctional/delete_my-image_image (0.18s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.18s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-934300
--- PASS: TestFunctional/delete_minikube_cached_images (0.18s)

                                                
                                    
TestMutliControlPlane/serial/StartCluster (667.32s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-792400 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0307 23:14:37.348297    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 23:14:58.841990    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:14:58.858412    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:14:58.873547    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:14:58.907736    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:14:58.951056    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:14:59.040369    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:14:59.211099    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:14:59.537652    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:15:00.190950    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:15:01.480696    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:15:04.043662    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:15:09.171908    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:15:19.422202    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:15:39.915903    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:16:20.878066    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:17:42.802423    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:19:37.348457    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 23:19:58.852334    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:20:26.658499    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-792400 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (10m32.8602515s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr
E0307 23:22:40.567810    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr: (34.4613308s)
--- PASS: TestMutliControlPlane/serial/StartCluster (667.32s)
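Note: the --ha flag asks minikube to start a multi-control-plane cluster (the scenario this TestMutliControlPlane suite covers) under the ha-792400 profile; the two timed commands above are:

out/minikube-windows-amd64.exe start -p ha-792400 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr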

                                                
                                    
TestMutliControlPlane/serial/DeployApp (12.41s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-792400 -- rollout status deployment/busybox: (3.7240919s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-8vztn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-8vztn -- nslookup kubernetes.io: (1.8786313s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-dswbq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-dswbq -- nslookup kubernetes.io: (1.6492546s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-wmtt9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-8vztn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-dswbq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-wmtt9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-8vztn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-dswbq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-wmtt9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (12.41s)
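Note: the DNS checks above go through minikube's kubectl passthrough, so no separately installed kubectl is required on the host. The pattern from this run (pod names are specific to this deployment):

out/minikube-windows-amd64.exe kubectl -p ha-792400 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-windows-amd64.exe kubectl -p ha-792400 -- rollout status deployment/busybox
out/minikube-windows-amd64.exe kubectl -p ha-792400 -- exec busybox-5b5d89c9d6-8vztn -- nslookup kubernetes.default.svc.cluster.local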

                                                
                                    
TestMutliControlPlane/serial/AddWorkerNode (235.94s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-792400 -v=7 --alsologtostderr
E0307 23:24:37.343694    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 23:24:58.856269    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-792400 -v=7 --alsologtostderr: (3m10.2659977s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr: (45.6690937s)
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (235.94s)
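Note: additional nodes are joined to the existing HA profile with node add, after which status reports every node; as run here:

out/minikube-windows-amd64.exe node add -p ha-792400 -v=7 --alsologtostderr
out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr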

                                                
                                    
TestMutliControlPlane/serial/NodeLabels (0.18s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-792400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.18s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterClusterStart (26.91s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (26.9086295s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (26.91s)

                                                
                                    
TestMutliControlPlane/serial/CopyFile (592.69s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 status --output json -v=7 --alsologtostderr: (45.6132307s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp testdata\cp-test.txt ha-792400:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp testdata\cp-test.txt ha-792400:/home/docker/cp-test.txt: (9.1146618s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test.txt": (9.060097s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile463807614\001\cp-test_ha-792400.txt
E0307 23:29:37.352525    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile463807614\001\cp-test_ha-792400.txt: (9.0888234s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test.txt": (9.1000287s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400:/home/docker/cp-test.txt ha-792400-m02:/home/docker/cp-test_ha-792400_ha-792400-m02.txt
E0307 23:29:58.858867    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400:/home/docker/cp-test.txt ha-792400-m02:/home/docker/cp-test_ha-792400_ha-792400-m02.txt: (15.758274s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test.txt": (9.028059s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test_ha-792400_ha-792400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test_ha-792400_ha-792400-m02.txt": (9.115261s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400:/home/docker/cp-test.txt ha-792400-m03:/home/docker/cp-test_ha-792400_ha-792400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400:/home/docker/cp-test.txt ha-792400-m03:/home/docker/cp-test_ha-792400_ha-792400-m03.txt: (15.8090818s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test.txt": (9.1413921s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test_ha-792400_ha-792400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test_ha-792400_ha-792400-m03.txt": (9.0531281s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400:/home/docker/cp-test.txt ha-792400-m04:/home/docker/cp-test_ha-792400_ha-792400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400:/home/docker/cp-test.txt ha-792400-m04:/home/docker/cp-test_ha-792400_ha-792400-m04.txt: (15.6736091s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test.txt"
E0307 23:31:22.034010    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test.txt": (8.9846161s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test_ha-792400_ha-792400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test_ha-792400_ha-792400-m04.txt": (9.0456057s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp testdata\cp-test.txt ha-792400-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp testdata\cp-test.txt ha-792400-m02:/home/docker/cp-test.txt: (9.0361367s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test.txt": (9.0547043s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile463807614\001\cp-test_ha-792400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile463807614\001\cp-test_ha-792400-m02.txt: (9.0575757s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test.txt": (9.0583269s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m02:/home/docker/cp-test.txt ha-792400:/home/docker/cp-test_ha-792400-m02_ha-792400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m02:/home/docker/cp-test.txt ha-792400:/home/docker/cp-test_ha-792400-m02_ha-792400.txt: (15.9398261s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test.txt": (9.1101648s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test_ha-792400-m02_ha-792400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test_ha-792400-m02_ha-792400.txt": (9.1061497s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m02:/home/docker/cp-test.txt ha-792400-m03:/home/docker/cp-test_ha-792400-m02_ha-792400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m02:/home/docker/cp-test.txt ha-792400-m03:/home/docker/cp-test_ha-792400-m02_ha-792400-m03.txt: (16.0630782s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test.txt": (9.0506273s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test_ha-792400-m02_ha-792400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test_ha-792400-m02_ha-792400-m03.txt": (9.0927228s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m02:/home/docker/cp-test.txt ha-792400-m04:/home/docker/cp-test_ha-792400-m02_ha-792400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m02:/home/docker/cp-test.txt ha-792400-m04:/home/docker/cp-test_ha-792400-m02_ha-792400-m04.txt: (15.7374218s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test.txt": (8.9862786s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test_ha-792400-m02_ha-792400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test_ha-792400-m02_ha-792400-m04.txt": (8.9830778s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp testdata\cp-test.txt ha-792400-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp testdata\cp-test.txt ha-792400-m03:/home/docker/cp-test.txt: (9.1229385s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test.txt": (9.1148984s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile463807614\001\cp-test_ha-792400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile463807614\001\cp-test_ha-792400-m03.txt: (9.0998763s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test.txt": (9.1142929s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m03:/home/docker/cp-test.txt ha-792400:/home/docker/cp-test_ha-792400-m03_ha-792400.txt
E0307 23:34:37.359198    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m03:/home/docker/cp-test.txt ha-792400:/home/docker/cp-test_ha-792400-m03_ha-792400.txt: (15.674914s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test.txt": (9.033431s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test_ha-792400-m03_ha-792400.txt"
E0307 23:34:58.862027    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test_ha-792400-m03_ha-792400.txt": (9.0657757s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m03:/home/docker/cp-test.txt ha-792400-m02:/home/docker/cp-test_ha-792400-m03_ha-792400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m03:/home/docker/cp-test.txt ha-792400-m02:/home/docker/cp-test_ha-792400-m03_ha-792400-m02.txt: (15.6240445s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test.txt": (9.0356854s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test_ha-792400-m03_ha-792400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test_ha-792400-m03_ha-792400-m02.txt": (9.019976s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m03:/home/docker/cp-test.txt ha-792400-m04:/home/docker/cp-test_ha-792400-m03_ha-792400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m03:/home/docker/cp-test.txt ha-792400-m04:/home/docker/cp-test_ha-792400-m03_ha-792400-m04.txt: (15.6546173s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test.txt": (8.8237013s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test_ha-792400-m03_ha-792400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test_ha-792400-m03_ha-792400-m04.txt": (8.7540053s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp testdata\cp-test.txt ha-792400-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp testdata\cp-test.txt ha-792400-m04:/home/docker/cp-test.txt: (8.6842274s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test.txt": (8.7392894s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile463807614\001\cp-test_ha-792400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMutliControlPlaneserialCopyFile463807614\001\cp-test_ha-792400-m04.txt: (8.8407683s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test.txt": (8.6866824s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m04:/home/docker/cp-test.txt ha-792400:/home/docker/cp-test_ha-792400-m04_ha-792400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m04:/home/docker/cp-test.txt ha-792400:/home/docker/cp-test_ha-792400-m04_ha-792400.txt: (15.4164633s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test.txt": (8.7751704s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test_ha-792400-m04_ha-792400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400 "sudo cat /home/docker/cp-test_ha-792400-m04_ha-792400.txt": (8.7691107s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m04:/home/docker/cp-test.txt ha-792400-m02:/home/docker/cp-test_ha-792400-m04_ha-792400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m04:/home/docker/cp-test.txt ha-792400-m02:/home/docker/cp-test_ha-792400-m04_ha-792400-m02.txt: (15.2724225s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test.txt": (8.83906s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test_ha-792400-m04_ha-792400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m02 "sudo cat /home/docker/cp-test_ha-792400-m04_ha-792400-m02.txt": (8.7829721s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m04:/home/docker/cp-test.txt ha-792400-m03:/home/docker/cp-test_ha-792400-m04_ha-792400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 cp ha-792400-m04:/home/docker/cp-test.txt ha-792400-m03:/home/docker/cp-test_ha-792400-m04_ha-792400-m03.txt: (15.2657908s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m04 "sudo cat /home/docker/cp-test.txt": (8.7496994s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test_ha-792400-m04_ha-792400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 ssh -n ha-792400-m03 "sudo cat /home/docker/cp-test_ha-792400-m04_ha-792400-m03.txt": (8.7684931s)
--- PASS: TestMutliControlPlane/serial/CopyFile (592.69s)

                                                
                                    
TestMutliControlPlane/serial/StopSecondaryNode (67.49s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-792400 node stop m02 -v=7 --alsologtostderr: (32.4898433s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr
E0307 23:39:20.590092    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-792400 status -v=7 --alsologtostderr: exit status 7 (34.9934315s)

                                                
                                                
-- stdout --
	ha-792400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-792400-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-792400-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-792400-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 23:38:52.959713    4604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0307 23:38:53.041424    4604 out.go:291] Setting OutFile to fd 588 ...
	I0307 23:38:53.041652    4604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 23:38:53.042256    4604 out.go:304] Setting ErrFile to fd 692...
	I0307 23:38:53.042256    4604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 23:38:53.056090    4604 out.go:298] Setting JSON to false
	I0307 23:38:53.056090    4604 mustload.go:65] Loading cluster: ha-792400
	I0307 23:38:53.056090    4604 notify.go:220] Checking for updates...
	I0307 23:38:53.057025    4604 config.go:182] Loaded profile config "ha-792400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:38:53.057025    4604 status.go:255] checking status of ha-792400 ...
	I0307 23:38:53.057737    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:38:55.106000    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:38:55.106000    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:38:55.106000    4604 status.go:330] ha-792400 host status = "Running" (err=<nil>)
	I0307 23:38:55.106000    4604 host.go:66] Checking if "ha-792400" exists ...
	I0307 23:38:55.119283    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:38:57.207362    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:38:57.207424    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:38:57.207424    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:38:59.665348    4604 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:38:59.665348    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:38:59.665505    4604 host.go:66] Checking if "ha-792400" exists ...
	I0307 23:38:59.677470    4604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 23:38:59.677470    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400 ).state
	I0307 23:39:01.666113    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:39:01.677556    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:01.677556    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400 ).networkadapters[0]).ipaddresses[0]
	I0307 23:39:04.079336    4604 main.go:141] libmachine: [stdout =====>] : 172.20.58.169
	
	I0307 23:39:04.079396    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:04.079396    4604 sshutil.go:53] new ssh client: &{IP:172.20.58.169 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400\id_rsa Username:docker}
	I0307 23:39:04.186292    4604 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.5087244s)
	I0307 23:39:04.198072    4604 ssh_runner.go:195] Run: systemctl --version
	I0307 23:39:04.222549    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 23:39:04.243979    4604 kubeconfig.go:125] found "ha-792400" server: "https://172.20.63.254:8443"
	I0307 23:39:04.244145    4604 api_server.go:166] Checking apiserver status ...
	I0307 23:39:04.252930    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 23:39:04.294592    4604 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2184/cgroup
	W0307 23:39:04.313484    4604 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2184/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0307 23:39:04.326742    4604 ssh_runner.go:195] Run: ls
	I0307 23:39:04.329972    4604 api_server.go:253] Checking apiserver healthz at https://172.20.63.254:8443/healthz ...
	I0307 23:39:04.335900    4604 api_server.go:279] https://172.20.63.254:8443/healthz returned 200:
	ok
	I0307 23:39:04.335900    4604 status.go:422] ha-792400 apiserver status = Running (err=<nil>)
	I0307 23:39:04.335900    4604 status.go:257] ha-792400 status: &{Name:ha-792400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 23:39:04.335900    4604 status.go:255] checking status of ha-792400-m02 ...
	I0307 23:39:04.343928    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m02 ).state
	I0307 23:39:06.373014    4604 main.go:141] libmachine: [stdout =====>] : Off
	
	I0307 23:39:06.373181    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:06.373181    4604 status.go:330] ha-792400-m02 host status = "Stopped" (err=<nil>)
	I0307 23:39:06.373181    4604 status.go:343] host is not running, skipping remaining checks
	I0307 23:39:06.373181    4604 status.go:257] ha-792400-m02 status: &{Name:ha-792400-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 23:39:06.373325    4604 status.go:255] checking status of ha-792400-m03 ...
	I0307 23:39:06.373422    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:39:08.371037    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:39:08.371037    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:08.371275    4604 status.go:330] ha-792400-m03 host status = "Running" (err=<nil>)
	I0307 23:39:08.371275    4604 host.go:66] Checking if "ha-792400-m03" exists ...
	I0307 23:39:08.371482    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:39:10.314799    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:39:10.314799    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:10.314965    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:39:12.669654    4604 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:39:12.674372    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:12.674462    4604 host.go:66] Checking if "ha-792400-m03" exists ...
	I0307 23:39:12.686786    4604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 23:39:12.686786    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m03 ).state
	I0307 23:39:14.611508    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:39:14.611508    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:14.615467    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m03 ).networkadapters[0]).ipaddresses[0]
	I0307 23:39:16.949752    4604 main.go:141] libmachine: [stdout =====>] : 172.20.59.36
	
	I0307 23:39:16.949752    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:16.950290    4604 sshutil.go:53] new ssh client: &{IP:172.20.59.36 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m03\id_rsa Username:docker}
	I0307 23:39:17.041318    4604 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3544906s)
	I0307 23:39:17.053290    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 23:39:17.075371    4604 kubeconfig.go:125] found "ha-792400" server: "https://172.20.63.254:8443"
	I0307 23:39:17.075371    4604 api_server.go:166] Checking apiserver status ...
	I0307 23:39:17.087031    4604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 23:39:17.122636    4604 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2493/cgroup
	W0307 23:39:17.137812    4604 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2493/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0307 23:39:17.148201    4604 ssh_runner.go:195] Run: ls
	I0307 23:39:17.155790    4604 api_server.go:253] Checking apiserver healthz at https://172.20.63.254:8443/healthz ...
	I0307 23:39:17.162329    4604 api_server.go:279] https://172.20.63.254:8443/healthz returned 200:
	ok
	I0307 23:39:17.162329    4604 status.go:422] ha-792400-m03 apiserver status = Running (err=<nil>)
	I0307 23:39:17.162329    4604 status.go:257] ha-792400-m03 status: &{Name:ha-792400-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 23:39:17.163052    4604 status.go:255] checking status of ha-792400-m04 ...
	I0307 23:39:17.163495    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m04 ).state
	I0307 23:39:19.123103    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:39:19.123283    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:19.123283    4604 status.go:330] ha-792400-m04 host status = "Running" (err=<nil>)
	I0307 23:39:19.123283    4604 host.go:66] Checking if "ha-792400-m04" exists ...
	I0307 23:39:19.123816    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m04 ).state
	I0307 23:39:21.115358    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:39:21.115358    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:21.115595    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m04 ).networkadapters[0]).ipaddresses[0]
	I0307 23:39:23.440543    4604 main.go:141] libmachine: [stdout =====>] : 172.20.57.78
	
	I0307 23:39:23.440543    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:23.440634    4604 host.go:66] Checking if "ha-792400-m04" exists ...
	I0307 23:39:23.452209    4604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 23:39:23.452209    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-792400-m04 ).state
	I0307 23:39:25.372368    4604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0307 23:39:25.372638    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:25.372638    4604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-792400-m04 ).networkadapters[0]).ipaddresses[0]
	I0307 23:39:27.685227    4604 main.go:141] libmachine: [stdout =====>] : 172.20.57.78
	
	I0307 23:39:27.685227    4604 main.go:141] libmachine: [stderr =====>] : 
	I0307 23:39:27.685895    4604 sshutil.go:53] new ssh client: &{IP:172.20.57.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ha-792400-m04\id_rsa Username:docker}
	I0307 23:39:27.772379    4604 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.32013s)
	I0307 23:39:27.785480    4604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 23:39:27.808370    4604 status.go:257] ha-792400-m04 status: &{Name:ha-792400-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopSecondaryNode (67.49s)

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (19.31s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0307 23:39:37.359669    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (19.3057303s)
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (19.31s)

                                                
                                    
TestImageBuild/serial/Setup (180.96s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-304200 --driver=hyperv
E0307 23:48:02.048294    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-304200 --driver=hyperv: (3m0.9620381s)
--- PASS: TestImageBuild/serial/Setup (180.96s)

                                                
                                    
TestImageBuild/serial/NormalBuild (8.99s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-304200
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-304200: (8.9899308s)
--- PASS: TestImageBuild/serial/NormalBuild (8.99s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (8.13s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-304200
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-304200: (8.1292096s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.13s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (7.16s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-304200
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-304200: (7.1546875s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.16s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (7s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-304200
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-304200: (6.9981164s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.00s)

                                                
                                    
TestJSONOutput/start/Command (223.97s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-050000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0307 23:49:58.863103    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-050000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m43.9658317s)
--- PASS: TestJSONOutput/start/Command (223.97s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (7.33s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-050000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-050000 --output=json --user=testUser: (7.3319955s)
--- PASS: TestJSONOutput/pause/Command (7.33s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (7.02s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-050000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-050000 --output=json --user=testUser: (7.0178447s)
--- PASS: TestJSONOutput/unpause/Command (7.02s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (31.55s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-050000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-050000 --output=json --user=testUser: (31.5538524s)
--- PASS: TestJSONOutput/stop/Command (31.55s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.35s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-768500 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-768500 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (263.5401ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"73eaea78-b0a3-41e2-ab3c-c4bd3381c6a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-768500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"43bdf137-b740-4d8b-93dc-cb8ca69a8be4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube7\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"90c17686-7b6a-4ac1-a6b2-4595868eabb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"43fb8e3a-146c-4edb-9d18-376472090645","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"0be34a8c-3ba6-4f39-8259-ada415c71046","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16214"}}
	{"specversion":"1.0","id":"fce9e134-5cc3-4f29-be2a-fbd6741cefde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0af1d0ee-543b-4a2e-afcc-094187c69fab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0307 23:54:28.217775   12864 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-768500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-768500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-768500: (1.0878066s)
--- PASS: TestErrorJSONOutput (1.35s)

                                                
                                    
TestMainNoArgs (0.22s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.22s)

                                                
                                    
TestMinikubeProfile (487.45s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-315200 --driver=hyperv
E0307 23:54:37.371578    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 23:54:58.867380    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0307 23:56:00.604744    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-315200 --driver=hyperv: (2m58.7501126s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-315200 --driver=hyperv
E0307 23:59:37.364525    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0307 23:59:58.872047    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-315200 --driver=hyperv: (3m4.1892277s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-315200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.5536567s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-315200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.2814284s)
helpers_test.go:175: Cleaning up "second-315200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-315200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-315200: (44.8386703s)
helpers_test.go:175: Cleaning up "first-315200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-315200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-315200: (39.9280176s)
--- PASS: TestMinikubeProfile (487.45s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (133.64s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-590900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0308 00:04:37.377250    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0308 00:04:42.065128    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-590900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m12.6373252s)
--- PASS: TestMountStart/serial/StartWithMountFirst (133.64s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (8.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-590900 ssh -- ls /minikube-host
E0308 00:04:58.881497    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-590900 ssh -- ls /minikube-host: (8.4120047s)
--- PASS: TestMountStart/serial/VerifyMountFirst (8.41s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (137.2s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-590900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-590900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m16.1874055s)
--- PASS: TestMountStart/serial/StartWithMountSecond (137.20s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (8.75s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-590900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-590900 ssh -- ls /minikube-host: (8.7474601s)
--- PASS: TestMountStart/serial/VerifyMountSecond (8.75s)

                                                
                                    
TestMountStart/serial/DeleteFirst (28.35s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-590900 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-590900 --alsologtostderr -v=5: (28.3539118s)
--- PASS: TestMountStart/serial/DeleteFirst (28.35s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (8.55s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-590900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-590900 ssh -- ls /minikube-host: (8.5446516s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (8.55s)

                                                
                                    
TestMountStart/serial/Stop (24.24s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-590900
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-590900: (24.2389228s)
--- PASS: TestMountStart/serial/Stop (24.24s)

                                                
                                    
TestMountStart/serial/RestartStopped (105.79s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-590900
E0308 00:09:37.373400    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0308 00:09:58.876270    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-590900: (1m44.7862361s)
--- PASS: TestMountStart/serial/RestartStopped (105.79s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (8.73s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-590900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-590900 ssh -- ls /minikube-host: (8.7340909s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (8.73s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (386.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-397400 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0308 00:12:40.626667    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0308 00:14:37.383905    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0308 00:14:58.884703    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-397400 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m4.8266256s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 status --alsologtostderr: (21.7457057s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (386.57s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (8.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- rollout status deployment/busybox: (3.205939s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- exec busybox-5b5d89c9d6-ctt42 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- exec busybox-5b5d89c9d6-ctt42 -- nslookup kubernetes.io: (1.9632643s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- exec busybox-5b5d89c9d6-j7ck4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- exec busybox-5b5d89c9d6-ctt42 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- exec busybox-5b5d89c9d6-j7ck4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- exec busybox-5b5d89c9d6-ctt42 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-397400 -- exec busybox-5b5d89c9d6-j7ck4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.97s)
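
The busybox deployment above verifies that pod DNS resolves from pods scheduled on both nodes. A minimal sketch of that check using the same context; the pod name is generated at deploy time, so it is a placeholder here:

    # List the generated busybox pod names, then resolve cluster DNS from inside one of them.
    kubectl --context multinode-397400 get pods -o jsonpath='{.items[*].metadata.name}'
    kubectl --context multinode-397400 exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local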

                                                
                                    
x
+
TestMultiNode/serial/AddNode (203.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-397400 -v 3 --alsologtostderr
E0308 00:19:37.382896    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0308 00:19:58.886225    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-397400 -v 3 --alsologtostderr: (2m51.3932376s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 status --alsologtostderr
E0308 00:21:22.081857    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 status --alsologtostderr: (32.5553993s)
--- PASS: TestMultiNode/serial/AddNode (203.95s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-397400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.16s)
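
The jsonpath query above dumps each node's full label map as JSON. For quick manual inspection, an assumed equivalent using a standard kubectl flag (not part of the test itself):

    # One line per node with all labels appended.
    kubectl --context multinode-397400 get nodes --show-labels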

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (10.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.8652517s)
--- PASS: TestMultiNode/serial/ProfileList (10.87s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (327.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 status --output json --alsologtostderr: (32.9065338s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 cp testdata\cp-test.txt multinode-397400:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 cp testdata\cp-test.txt multinode-397400:/home/docker/cp-test.txt: (8.6791347s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400 "sudo cat /home/docker/cp-test.txt": (8.8018121s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1220590344\001\cp-test_multinode-397400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1220590344\001\cp-test_multinode-397400.txt: (8.6355357s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400 "sudo cat /home/docker/cp-test.txt": (8.6954283s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400:/home/docker/cp-test.txt multinode-397400-m02:/home/docker/cp-test_multinode-397400_multinode-397400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400:/home/docker/cp-test.txt multinode-397400-m02:/home/docker/cp-test_multinode-397400_multinode-397400-m02.txt: (15.0570124s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400 "sudo cat /home/docker/cp-test.txt": (8.6429995s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m02 "sudo cat /home/docker/cp-test_multinode-397400_multinode-397400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m02 "sudo cat /home/docker/cp-test_multinode-397400_multinode-397400-m02.txt": (8.6620711s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400:/home/docker/cp-test.txt multinode-397400-m03:/home/docker/cp-test_multinode-397400_multinode-397400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400:/home/docker/cp-test.txt multinode-397400-m03:/home/docker/cp-test_multinode-397400_multinode-397400-m03.txt: (15.0921667s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400 "sudo cat /home/docker/cp-test.txt": (8.6376044s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m03 "sudo cat /home/docker/cp-test_multinode-397400_multinode-397400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m03 "sudo cat /home/docker/cp-test_multinode-397400_multinode-397400-m03.txt": (8.6555787s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 cp testdata\cp-test.txt multinode-397400-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 cp testdata\cp-test.txt multinode-397400-m02:/home/docker/cp-test.txt: (8.7797138s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m02 "sudo cat /home/docker/cp-test.txt": (8.7656245s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1220590344\001\cp-test_multinode-397400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1220590344\001\cp-test_multinode-397400-m02.txt: (8.7171775s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m02 "sudo cat /home/docker/cp-test.txt"
E0308 00:24:37.383708    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m02 "sudo cat /home/docker/cp-test.txt": (8.7187217s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400-m02:/home/docker/cp-test.txt multinode-397400:/home/docker/cp-test_multinode-397400-m02_multinode-397400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400-m02:/home/docker/cp-test.txt multinode-397400:/home/docker/cp-test_multinode-397400-m02_multinode-397400.txt: (15.03559s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m02 "sudo cat /home/docker/cp-test.txt"
E0308 00:24:58.894611    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m02 "sudo cat /home/docker/cp-test.txt": (8.756872s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400 "sudo cat /home/docker/cp-test_multinode-397400-m02_multinode-397400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400 "sudo cat /home/docker/cp-test_multinode-397400-m02_multinode-397400.txt": (8.6072206s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400-m02:/home/docker/cp-test.txt multinode-397400-m03:/home/docker/cp-test_multinode-397400-m02_multinode-397400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400-m02:/home/docker/cp-test.txt multinode-397400-m03:/home/docker/cp-test_multinode-397400-m02_multinode-397400-m03.txt: (15.0634123s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m02 "sudo cat /home/docker/cp-test.txt": (8.6062958s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m03 "sudo cat /home/docker/cp-test_multinode-397400-m02_multinode-397400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m03 "sudo cat /home/docker/cp-test_multinode-397400-m02_multinode-397400-m03.txt": (8.491606s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 cp testdata\cp-test.txt multinode-397400-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 cp testdata\cp-test.txt multinode-397400-m03:/home/docker/cp-test.txt: (8.3067123s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m03 "sudo cat /home/docker/cp-test.txt": (8.3537763s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1220590344\001\cp-test_multinode-397400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile1220590344\001\cp-test_multinode-397400-m03.txt: (8.4542865s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m03 "sudo cat /home/docker/cp-test.txt": (8.4241609s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400-m03:/home/docker/cp-test.txt multinode-397400:/home/docker/cp-test_multinode-397400-m03_multinode-397400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400-m03:/home/docker/cp-test.txt multinode-397400:/home/docker/cp-test_multinode-397400-m03_multinode-397400.txt: (14.466035s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m03 "sudo cat /home/docker/cp-test.txt": (8.2890726s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400 "sudo cat /home/docker/cp-test_multinode-397400-m03_multinode-397400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400 "sudo cat /home/docker/cp-test_multinode-397400-m03_multinode-397400.txt": (8.3553116s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400-m03:/home/docker/cp-test.txt multinode-397400-m02:/home/docker/cp-test_multinode-397400-m03_multinode-397400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400-m03:/home/docker/cp-test.txt multinode-397400-m02:/home/docker/cp-test_multinode-397400-m03_multinode-397400-m02.txt: (14.5347985s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m03 "sudo cat /home/docker/cp-test.txt": (8.2799511s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m02 "sudo cat /home/docker/cp-test_multinode-397400-m03_multinode-397400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m02 "sudo cat /home/docker/cp-test_multinode-397400-m03_multinode-397400-m02.txt": (8.3204162s)
--- PASS: TestMultiNode/serial/CopyFile (327.87s)
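
For reference, the copy matrix exercised above reduces to three directions of "minikube cp"; a condensed sketch with the same profile and node names (the local destination path is illustrative, not the test's temp directory):

    # host -> node
    out/minikube-windows-amd64.exe -p multinode-397400 cp testdata\cp-test.txt multinode-397400:/home/docker/cp-test.txt
    # node -> host
    out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400:/home/docker/cp-test.txt C:\Temp\cp-test_multinode-397400.txt
    # node -> node
    out/minikube-windows-amd64.exe -p multinode-397400 cp multinode-397400:/home/docker/cp-test.txt multinode-397400-m02:/home/docker/cp-test.txt
    # verify on the receiving node
    out/minikube-windows-amd64.exe -p multinode-397400 ssh -n multinode-397400-m02 "sudo cat /home/docker/cp-test.txt"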

                                                
                                    
x
+
TestMultiNode/serial/StopNode (67.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 node stop m03: (21.454231s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-397400 status: exit status 7 (22.8867833s)

                                                
                                                
-- stdout --
	multinode-397400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-397400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-397400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 00:27:42.636991    3860 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-397400 status --alsologtostderr: exit status 7 (22.7107495s)

                                                
                                                
-- stdout --
	multinode-397400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-397400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-397400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 00:28:05.512539    8380 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0308 00:28:05.595082    8380 out.go:291] Setting OutFile to fd 616 ...
	I0308 00:28:05.601850    8380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 00:28:05.601850    8380 out.go:304] Setting ErrFile to fd 932...
	I0308 00:28:05.601850    8380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0308 00:28:05.617356    8380 out.go:298] Setting JSON to false
	I0308 00:28:05.617537    8380 mustload.go:65] Loading cluster: multinode-397400
	I0308 00:28:05.617537    8380 notify.go:220] Checking for updates...
	I0308 00:28:05.618292    8380 config.go:182] Loaded profile config "multinode-397400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0308 00:28:05.618292    8380 status.go:255] checking status of multinode-397400 ...
	I0308 00:28:05.618292    8380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:28:07.527192    8380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:28:07.538316    8380 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:28:07.538316    8380 status.go:330] multinode-397400 host status = "Running" (err=<nil>)
	I0308 00:28:07.538316    8380 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:28:07.538949    8380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:28:09.430385    8380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:28:09.440782    8380 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:28:09.440920    8380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:28:11.712509    8380 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:28:11.712509    8380 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:28:11.712629    8380 host.go:66] Checking if "multinode-397400" exists ...
	I0308 00:28:11.725576    8380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 00:28:11.725576    8380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400 ).state
	I0308 00:28:13.578490    8380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:28:13.578490    8380 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:28:13.578490    8380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]
	I0308 00:28:15.795401    8380 main.go:141] libmachine: [stdout =====>] : 172.20.48.212
	
	I0308 00:28:15.795401    8380 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:28:15.796069    8380 sshutil.go:53] new ssh client: &{IP:172.20.48.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400\id_rsa Username:docker}
	I0308 00:28:15.890846    8380 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.1652305s)
	I0308 00:28:15.901287    8380 ssh_runner.go:195] Run: systemctl --version
	I0308 00:28:15.919084    8380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 00:28:15.940346    8380 kubeconfig.go:125] found "multinode-397400" server: "https://172.20.48.212:8443"
	I0308 00:28:15.940346    8380 api_server.go:166] Checking apiserver status ...
	I0308 00:28:15.954007    8380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0308 00:28:15.988251    8380 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2157/cgroup
	W0308 00:28:16.011139    8380 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2157/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0308 00:28:16.021702    8380 ssh_runner.go:195] Run: ls
	I0308 00:28:16.028685    8380 api_server.go:253] Checking apiserver healthz at https://172.20.48.212:8443/healthz ...
	I0308 00:28:16.036633    8380 api_server.go:279] https://172.20.48.212:8443/healthz returned 200:
	ok
	I0308 00:28:16.036633    8380 status.go:422] multinode-397400 apiserver status = Running (err=<nil>)
	I0308 00:28:16.036633    8380 status.go:257] multinode-397400 status: &{Name:multinode-397400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0308 00:28:16.036918    8380 status.go:255] checking status of multinode-397400-m02 ...
	I0308 00:28:16.037725    8380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:28:17.886267    8380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:28:17.886267    8380 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:28:17.886267    8380 status.go:330] multinode-397400-m02 host status = "Running" (err=<nil>)
	I0308 00:28:17.886267    8380 host.go:66] Checking if "multinode-397400-m02" exists ...
	I0308 00:28:17.897692    8380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:28:19.762592    8380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:28:19.762592    8380 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:28:19.772715    8380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:28:21.970072    8380 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:28:21.970072    8380 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:28:21.981073    8380 host.go:66] Checking if "multinode-397400-m02" exists ...
	I0308 00:28:21.993509    8380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0308 00:28:21.993509    8380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m02 ).state
	I0308 00:28:23.858200    8380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0308 00:28:23.858200    8380 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:28:23.868377    8380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-397400-m02 ).networkadapters[0]).ipaddresses[0]
	I0308 00:28:26.095309    8380 main.go:141] libmachine: [stdout =====>] : 172.20.61.226
	
	I0308 00:28:26.095309    8380 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:28:26.095658    8380 sshutil.go:53] new ssh client: &{IP:172.20.61.226 Port:22 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-397400-m02\id_rsa Username:docker}
	I0308 00:28:26.188786    8380 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.1952381s)
	I0308 00:28:26.199943    8380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0308 00:28:26.222456    8380 status.go:257] multinode-397400-m02 status: &{Name:multinode-397400-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0308 00:28:26.222456    8380 status.go:255] checking status of multinode-397400-m03 ...
	I0308 00:28:26.223198    8380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-397400-m03 ).state
	I0308 00:28:28.092875    8380 main.go:141] libmachine: [stdout =====>] : Off
	
	I0308 00:28:28.092875    8380 main.go:141] libmachine: [stderr =====>] : 
	I0308 00:28:28.092983    8380 status.go:330] multinode-397400-m03 host status = "Stopped" (err=<nil>)
	I0308 00:28:28.092983    8380 status.go:343] host is not running, skipping remaining checks
	I0308 00:28:28.092983    8380 status.go:257] multinode-397400-m03 status: &{Name:multinode-397400-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (67.07s)
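
The --alsologtostderr trace above shows how the hyperv driver polls node state: it shells out to PowerShell for the VM power state and for the first IP of the VM's first network adapter. The same queries can be run by hand, copied from the trace (assumes the Hyper-V PowerShell module and an elevated session):

    # VM power state as libmachine reads it ("Running" / "Off")
    ( Hyper-V\Get-VM multinode-397400 ).state
    # First IP address of the first network adapter, used to build the SSH client
    (( Hyper-V\Get-VM multinode-397400 ).networkadapters[0]).ipaddresses[0]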

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (160.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 node start m03 -v=7 --alsologtostderr
E0308 00:29:20.666725    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0308 00:29:37.392627    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0308 00:29:58.885851    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 node start m03 -v=7 --alsologtostderr: (2m9.1297339s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 status -v=7 --alsologtostderr: (31.261078s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (160.56s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (62.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 node delete m03
E0308 00:39:58.898415    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 node delete m03: (41.6861304s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-397400 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-397400 status --alsologtostderr: (20.8585865s)
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (62.90s)

                                                
                                    
x
+
TestPreload (484.05s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-559300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0308 00:44:37.394658    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0308 00:44:58.895512    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
E0308 00:46:00.678069    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-559300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m0.6264657s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-559300 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-559300 image pull gcr.io/k8s-minikube/busybox: (7.6961806s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-559300
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-559300: (37.4329269s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-559300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0308 00:49:37.401716    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0308 00:49:58.897006    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-559300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m32.9184504s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-559300 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-559300 image list: (6.6770568s)
helpers_test.go:175: Cleaning up "test-preload-559300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-559300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-559300: (38.6942788s)
--- PASS: TestPreload (484.05s)

                                                
                                    
x
+
TestScheduledStopWindows (310.96s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-329000 --memory=2048 --driver=hyperv
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-329000 --memory=2048 --driver=hyperv: (3m0.6385696s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-329000 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-329000 --schedule 5m: (9.9825765s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-329000 -n scheduled-stop-329000
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-329000 -n scheduled-stop-329000: exit status 1 (10.0237849s)

                                                
                                                
** stderr ** 
	W0308 00:54:22.972967   11224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-329000 -- sudo systemctl show minikube-scheduled-stop --no-page
E0308 00:54:37.407333    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-329000 -- sudo systemctl show minikube-scheduled-stop --no-page: (8.6465093s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-329000 --schedule 5s
E0308 00:54:42.123889    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-329000 --schedule 5s: (9.6643026s)
E0308 00:54:58.898275    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-329000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-329000: exit status 7 (2.106189s)

                                                
                                                
-- stdout --
	scheduled-stop-329000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 00:55:51.302374    7324 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-329000 -n scheduled-stop-329000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-329000 -n scheduled-stop-329000: exit status 7 (2.1112638s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 00:55:53.416231    2956 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-329000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-329000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-329000: (27.7802351s)
--- PASS: TestScheduledStopWindows (310.96s)
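
The non-fatal stderr warning about the Docker CLI context "default" recurs throughout this run; it is emitted when minikube tries to resolve the current Docker context on a host where the context metadata file is missing. As a diagnostic sketch only (standard Docker CLI commands, not a verified fix for this host), the context state can be inspected with:

    # List known CLI contexts and show which one is current.
    docker context ls
    docker context show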

                                                
                                    
x
+
TestRunningBinaryUpgrade (984.97s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.977678448.exe start -p running-upgrade-642500 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.977678448.exe start -p running-upgrade-642500 --memory=2200 --vm-driver=hyperv: (8m21.1696522s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-642500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0308 01:11:22.139384    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-642500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m58.1327709s)
helpers_test.go:175: Cleaning up "running-upgrade-642500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-642500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-642500: (1m5.0386689s)
--- PASS: TestRunningBinaryUpgrade (984.97s)

                                                
                                    
x
+
TestKubernetesUpgrade (1115.08s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-240200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-240200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (7m38.6770009s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-240200
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-240200: (34.7042675s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-240200 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-240200 status --format={{.Host}}: exit status 7 (2.3381033s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 01:13:43.574588    9680 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-240200 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-240200 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (4m55.4189225s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-240200 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-240200 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-240200 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (1.5728907s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-240200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16214
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 01:18:41.588982    7440 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-240200
	    minikube start -p kubernetes-upgrade-240200 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2402002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-240200 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-240200 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-240200 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (4m36.5358711s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-240200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-240200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-240200: (45.5962636s)
--- PASS: TestKubernetesUpgrade (1115.08s)
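
The exit-status-106 block above confirms that in-place downgrades are refused; the supported paths are the ones minikube itself suggests. Condensed from that suggestion text (recreate the profile at the older version, or keep both versions side by side):

    # Option 1: recreate the profile at the older version
    minikube delete -p kubernetes-upgrade-240200
    minikube start -p kubernetes-upgrade-240200 --kubernetes-version=v1.20.0
    # Option 2: keep the v1.29.0-rc.2 cluster and add a second profile at v1.20.0
    minikube start -p kubernetes-upgrade-2402002 --kubernetes-version=v1.20.0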

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-463800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-463800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (372.065ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-463800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16214
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 00:56:23.328672    6124 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)
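
The exit-status-14 case above is the expected usage error: --kubernetes-version cannot be combined with --no-kubernetes. The valid forms, assembled from the flags and the hint in the output (a sketch, not part of the test run):

    # Start the profile with no Kubernetes components at all
    out/minikube-windows-amd64.exe start -p NoKubernetes-463800 --no-kubernetes --driver=hyperv
    # If a global kubernetes-version is pinned in config, clear it first
    minikube config unset kubernetes-version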

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.05s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (779.21s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3045212534.exe start -p stopped-upgrade-299600 --memory=2200 --vm-driver=hyperv
E0308 01:09:58.919388    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3045212534.exe start -p stopped-upgrade-299600 --memory=2200 --vm-driver=hyperv: (6m21.4400627s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3045212534.exe -p stopped-upgrade-299600 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3045212534.exe -p stopped-upgrade-299600 stop: (36.4082707s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-299600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-299600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m1.3466047s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (779.21s)

                                                
                                    
x
+
TestPause/serial/Start (392.23s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-549000 --memory=2048 --install-addons=false --wait=all --driver=hyperv
E0308 01:19:20.940324    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0308 01:19:37.414579    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-723800\client.crt: The system cannot find the path specified.
E0308 01:19:58.923324    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-549000 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (6m32.2292719s)
--- PASS: TestPause/serial/Start (392.23s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (9.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-299600
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-299600: (9.1339986s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.13s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (479.4s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-549000 --alsologtostderr -v=1 --driver=hyperv
E0308 01:28:02.159832    8324 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-934300\client.crt: The system cannot find the path specified.
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-549000 --alsologtostderr -v=1 --driver=hyperv: (7m59.3714852s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (479.40s)

                                                
                                    
x
+
TestPause/serial/Pause (8.8s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-549000 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-549000 --alsologtostderr -v=5: (8.7929911s)
--- PASS: TestPause/serial/Pause (8.80s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (13.84s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-549000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-549000 --output=json --layout=cluster: exit status 2 (13.8349858s)

                                                
                                                
-- stdout --
	{"Name":"pause-549000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-549000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	W0308 01:33:49.792724   10520 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestPause/serial/VerifyStatus (13.84s)
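
The cluster-layout JSON above encodes state as HTTP-style status codes (418 = Paused, 405 = Stopped, 200 = OK). A small PowerShell sketch for pulling those fields out of the same command; the field names mirror the JSON shown, and minikube still prints the JSON on stdout even though it exits 2 here:

    $status = out/minikube-windows-amd64.exe status -p pause-549000 --output=json --layout=cluster | ConvertFrom-Json
    $status.StatusName                                   # cluster-level name, "Paused" in this run
    $status.Nodes[0].Components.apiserver.StatusName     # per-component state, e.g. "Paused"
    $status.Nodes[0].Components.kubelet.StatusCode       # 405 while the kubelet is stopped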

                                                
                                    

Test skip (32/216)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-934300 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-934300 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 8328: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)
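Note: what the test waits for above can be illustrated with a minimal Go sketch that launches the same dashboard command and scans its output for a URL. The command line, port, and profile name are taken from the log; the scanning logic is a hypothetical illustration, not the actual functional_test.go helper.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
	"time"
)

func main() {
	// Same invocation the test daemonizes (binary path and profile from the log).
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"dashboard", "--url", "--port", "36195", "-p", "functional-934300")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	urlRe := regexp.MustCompile(`https?://\S+`)
	found := make(chan string, 1)
	go func() {
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			if u := urlRe.FindString(sc.Text()); u != "" {
				found <- u
				return
			}
		}
	}()

	select {
	case u := <-found:
		fmt.Println("dashboard URL:", u)
	case <-time.After(5 * time.Minute): // the run above gave up after ~300s
		fmt.Println("output didn't produce a URL")
	}
	_ = cmd.Process.Kill()
}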

TestFunctional/parallel/DryRun (5.06s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-934300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-934300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0566764s)

-- stdout --
	* [functional-934300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16214
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0307 23:05:35.766132    3608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0307 23:05:35.861597    3608 out.go:291] Setting OutFile to fd 784 ...
	I0307 23:05:35.862337    3608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 23:05:35.862507    3608 out.go:304] Setting ErrFile to fd 800...
	I0307 23:05:35.862579    3608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 23:05:35.885004    3608 out.go:298] Setting JSON to false
	I0307 23:05:35.889966    3608 start.go:129] hostinfo: {"hostname":"minikube7","uptime":11690,"bootTime":1709841045,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0307 23:05:35.890056    3608 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 23:05:35.894775    3608 out.go:177] * [functional-934300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0307 23:05:35.898131    3608 notify.go:220] Checking for updates...
	I0307 23:05:35.900366    3608 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:05:35.903488    3608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 23:05:35.906306    3608 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0307 23:05:35.908643    3608 out.go:177]   - MINIKUBE_LOCATION=16214
	I0307 23:05:35.911834    3608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 23:05:35.915263    3608 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:05:35.915845    3608 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.06s)
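Note: the dry-run invocation above can be reproduced with a small Go sketch that runs the same command and reports its exit status. The flags are copied from the log; the handling below is illustrative only and does not reflect how functional_test.go evaluates the result, which is skipped on HyperV per the issue linked above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Re-run the dry-run start from the log; --memory 250MB is well below
	// minikube's usual memory minimum, so a non-zero exit is anticipated.
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"start", "-p", "functional-934300", "--dry-run",
		"--memory", "250MB", "--alsologtostderr", "--driver=hyperv")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("dry-run exited with status %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
	fmt.Println(string(out))
}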

TestFunctional/parallel/InternationalLanguage (5.03s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-934300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-934300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0282653s)

-- stdout --
	* [functional-934300] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16214
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0307 23:05:30.719987    3088 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0307 23:05:30.807847    3088 out.go:291] Setting OutFile to fd 960 ...
	I0307 23:05:30.808874    3088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 23:05:30.808874    3088 out.go:304] Setting ErrFile to fd 796...
	I0307 23:05:30.808874    3088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 23:05:30.834601    3088 out.go:298] Setting JSON to false
	I0307 23:05:30.838460    3088 start.go:129] hostinfo: {"hostname":"minikube7","uptime":11684,"bootTime":1709841045,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0307 23:05:30.838460    3088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0307 23:05:30.844150    3088 out.go:177] * [functional-934300] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0307 23:05:30.848607    3088 notify.go:220] Checking for updates...
	I0307 23:05:30.852921    3088 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0307 23:05:30.855575    3088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 23:05:30.858283    3088 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0307 23:05:30.860854    3088 out.go:177]   - MINIKUBE_LOCATION=16214
	I0307 23:05:30.865275    3088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 23:05:30.870151    3088 config.go:182] Loaded profile config "functional-934300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0307 23:05:30.871825    3088 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
